00:00:00.001 Started by upstream project "autotest-spdk-master-vs-dpdk-v22.11" build number 2461
00:00:00.001 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3726
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.062 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.063 The recommended git tool is: git
00:00:00.063 using credential 00000000-0000-0000-0000-000000000002
00:00:00.064 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.087 Fetching changes from the remote Git repository
00:00:00.089 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.120 Using shallow fetch with depth 1
00:00:00.120 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.121 > git --version # timeout=10
00:00:00.161 > git --version # 'git version 2.39.2'
00:00:00.161 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.197 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.197 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:04.763 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:04.772 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:04.783 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:04.783 > git config core.sparsecheckout # timeout=10
00:00:04.792 > git read-tree -mu HEAD # timeout=10
00:00:04.807 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:04.828 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:04.829 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:04.911 [Pipeline] Start of Pipeline
00:00:04.922 [Pipeline] library
00:00:04.924 Loading library shm_lib@master
00:00:04.924 Library shm_lib@master is cached. Copying from home.
00:00:04.940 [Pipeline] node
00:00:04.967 Running on WFP4 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:04.968 [Pipeline] {
00:00:04.976 [Pipeline] catchError
00:00:04.978 [Pipeline] {
00:00:04.988 [Pipeline] wrap
00:00:04.994 [Pipeline] {
00:00:05.000 [Pipeline] stage
00:00:05.001 [Pipeline] { (Prologue)
00:00:05.198 [Pipeline] sh
00:00:06.000 + logger -p user.info -t JENKINS-CI
00:00:06.032 [Pipeline] echo
00:00:06.034 Node: WFP4
00:00:06.042 [Pipeline] sh
00:00:06.382 [Pipeline] setCustomBuildProperty
00:00:06.391 [Pipeline] echo
00:00:06.392 Cleanup processes
00:00:06.397 [Pipeline] sh
00:00:06.688 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.688 6569 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.714 [Pipeline] sh
00:00:07.015 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.015 ++ grep -v 'sudo pgrep'
00:00:07.015 ++ awk '{print $1}'
00:00:07.015 + sudo kill -9
00:00:07.015 + true
00:00:07.028 [Pipeline] cleanWs
00:00:07.037 [WS-CLEANUP] Deleting project workspace...
00:00:07.037 [WS-CLEANUP] Deferred wipeout is used...
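The "Cleanup processes" step above kills any stale processes still touching the workspace before the job starts: pgrep the workspace path, strip the pgrep invocation itself from the list, take the PIDs with awk, and kill -9 the rest. A minimal standalone sketch of that pipeline follows; the `cleanup_stale` function name and the self-PID filter are illustrative additions, and the job itself runs the raw pipeline under sudo:

```shell
#!/usr/bin/env bash
# Sketch of the prologue's "Cleanup processes" step: list processes whose
# command line mentions the given pattern, drop the pgrep invocation itself
# (and this shell), then SIGKILL whatever remains. The trailing "|| true"
# is why the log shows "+ true" when there was nothing left to kill.
cleanup_stale() {
    local pattern=$1 pids
    pids=$(pgrep -af "$pattern" | grep -v pgrep | awk -v self="$$" '$1 != self {print $1}')
    # kill with an empty PID list fails; mirror the job's "|| true" guard
    kill -9 $pids 2>/dev/null || true
}

cleanup_stale /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
```

Because the whole pipeline is guarded by `|| true`, the step stays green whether or not anything matched, which is exactly the `+ sudo kill -9` / `+ true` pair visible in the log.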
00:00:07.049 [WS-CLEANUP] done
00:00:07.052 [Pipeline] setCustomBuildProperty
00:00:07.061 [Pipeline] sh
00:00:07.371 + sudo git config --global --replace-all safe.directory '*'
00:00:07.469 [Pipeline] httpRequest
00:00:09.313 [Pipeline] echo
00:00:09.315 Sorcerer 10.211.164.20 is alive
00:00:09.323 [Pipeline] retry
00:00:09.325 [Pipeline] {
00:00:09.339 [Pipeline] httpRequest
00:00:09.343 HttpMethod: GET
00:00:09.344 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.345 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:09.355 Response Code: HTTP/1.1 200 OK
00:00:09.355 Success: Status code 200 is in the accepted range: 200,404
00:00:09.355 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.431 [Pipeline] }
00:00:11.448 [Pipeline] // retry
00:00:11.455 [Pipeline] sh
00:00:11.747 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:11.763 [Pipeline] httpRequest
00:00:12.296 [Pipeline] echo
00:00:12.298 Sorcerer 10.211.164.20 is alive
00:00:12.308 [Pipeline] retry
00:00:12.310 [Pipeline] {
00:00:12.325 [Pipeline] httpRequest
00:00:12.330 HttpMethod: GET
00:00:12.330 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:12.331 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:12.348 Response Code: HTTP/1.1 200 OK
00:00:12.349 Success: Status code 200 is in the accepted range: 200,404
00:00:12.349 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:58.365 [Pipeline] }
00:01:58.382 [Pipeline] // retry
00:01:58.389 [Pipeline] sh
00:01:58.682 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:02:01.239 [Pipeline] sh
00:02:01.531 + git -C spdk log --oneline -n5
00:02:01.531 e01cb43b8 mk/spdk.common.mk sed the minor version
00:02:01.531 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:02:01.531 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:02:01.531 66289a6db build: use VERSION file for storing version
00:02:01.531 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:02:01.559 [Pipeline] withCredentials
00:02:01.577 > git --version # timeout=10
00:02:01.607 > git --version # 'git version 2.39.2'
00:02:01.630 Masking supported pattern matches of $GIT_PASSWORD or $GIT_ASKPASS
00:02:01.632 [Pipeline] {
00:02:01.638 [Pipeline] retry
00:02:01.639 [Pipeline] {
00:02:01.651 [Pipeline] sh
00:02:02.166 + git ls-remote http://dpdk.org/git/dpdk-stable v22.11.4
00:02:02.439 [Pipeline] }
00:02:02.456 [Pipeline] // retry
00:02:02.461 [Pipeline] }
00:02:02.477 [Pipeline] // withCredentials
00:02:02.485 [Pipeline] httpRequest
00:02:03.112 [Pipeline] echo
00:02:03.114 Sorcerer 10.211.164.20 is alive
00:02:03.123 [Pipeline] retry
00:02:03.125 [Pipeline] {
00:02:03.139 [Pipeline] httpRequest
00:02:03.144 HttpMethod: GET
00:02:03.145 URL: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:03.146 Sending request to url: http://10.211.164.20/packages/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:03.150 Response Code: HTTP/1.1 200 OK
00:02:03.150 Success: Status code 200 is in the accepted range: 200,404
00:02:03.151 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:08.937 [Pipeline] }
00:02:08.952 [Pipeline] // retry
00:02:08.958 [Pipeline] sh
00:02:09.253 + tar --no-same-owner -xf dpdk_fee0f13c213d0584f0c42a51d0e0625d99a0b2f1.tar.gz
00:02:10.649 [Pipeline] sh
00:02:10.940 + git -C dpdk log --oneline -n5
00:02:10.940 caf0f5d395 version: 22.11.4
00:02:10.940 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt"
00:02:10.940 dc9c799c7d vhost: fix missing spinlock unlock
00:02:10.940 4307659a90 net/mlx5: fix LACP redirection in Rx domain
00:02:10.940 6ef77f2a5e net/gve: fix RX buffer size alignment
00:02:10.951 [Pipeline] }
00:02:10.964 [Pipeline] // stage
00:02:10.973 [Pipeline] stage
00:02:10.975 [Pipeline] { (Prepare)
00:02:10.991 [Pipeline] writeFile
00:02:11.004 [Pipeline] sh
00:02:11.291 + logger -p user.info -t JENKINS-CI
00:02:11.302 [Pipeline] sh
00:02:11.587 + logger -p user.info -t JENKINS-CI
00:02:11.598 [Pipeline] sh
00:02:11.882 + cat autorun-spdk.conf
00:02:11.882 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:11.882 SPDK_TEST_NVMF=1
00:02:11.882 SPDK_TEST_NVME_CLI=1
00:02:11.882 SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:11.882 SPDK_TEST_NVMF_NICS=e810
00:02:11.882 SPDK_TEST_VFIOUSER=1
00:02:11.882 SPDK_RUN_UBSAN=1
00:02:11.882 NET_TYPE=phy
00:02:11.882 SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:11.882 SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:11.890 RUN_NIGHTLY=1
00:02:11.895 [Pipeline] readFile
00:02:11.935 [Pipeline] withEnv
00:02:11.937 [Pipeline] {
00:02:11.948 [Pipeline] sh
00:02:12.234 + set -ex
00:02:12.234 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:02:12.234 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:02:12.234 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:12.234 ++ SPDK_TEST_NVMF=1
00:02:12.234 ++ SPDK_TEST_NVME_CLI=1
00:02:12.234 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:02:12.234 ++ SPDK_TEST_NVMF_NICS=e810
00:02:12.234 ++ SPDK_TEST_VFIOUSER=1
00:02:12.234 ++ SPDK_RUN_UBSAN=1
00:02:12.234 ++ NET_TYPE=phy
00:02:12.234 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4
00:02:12.234 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build
00:02:12.234 ++ RUN_NIGHTLY=1
00:02:12.235 + case $SPDK_TEST_NVMF_NICS in
00:02:12.235 + DRIVERS=ice
00:02:12.235 + [[ tcp == \r\d\m\a ]]
00:02:12.235 + [[ -n ice ]]
00:02:12.235 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:02:12.235 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:02:12.235 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:02:12.235 rmmod: ERROR: Module i40iw is not currently loaded
00:02:12.235 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:02:12.235 + true
00:02:12.235 + for D in $DRIVERS
00:02:12.235 + sudo modprobe ice
00:02:12.235 + exit 0
00:02:12.244 [Pipeline] }
00:02:12.258 [Pipeline] // withEnv
00:02:12.263 [Pipeline] }
00:02:12.277 [Pipeline] // stage
00:02:12.285 [Pipeline] catchError
00:02:12.287 [Pipeline] {
00:02:12.300 [Pipeline] timeout
00:02:12.300 Timeout set to expire in 1 hr 0 min
00:02:12.302 [Pipeline] {
00:02:12.315 [Pipeline] stage
00:02:12.317 [Pipeline] { (Tests)
00:02:12.331 [Pipeline] sh
00:02:12.620 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:12.620 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:12.620 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:12.620 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:02:12.620 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:02:12.620 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:12.620 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:02:12.620 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:12.620 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:02:12.620 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:02:12.620 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:02:12.620 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:02:12.620 + source /etc/os-release
00:02:12.620 ++ NAME='Fedora Linux'
00:02:12.620 ++ VERSION='39 (Cloud Edition)'
00:02:12.620 ++ ID=fedora
00:02:12.620 ++ VERSION_ID=39
00:02:12.620 ++ VERSION_CODENAME=
00:02:12.620 ++ PLATFORM_ID=platform:f39
00:02:12.620 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:02:12.620 ++ ANSI_COLOR='0;38;2;60;110;180'
00:02:12.620 ++ LOGO=fedora-logo-icon
00:02:12.620 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:02:12.620 ++ HOME_URL=https://fedoraproject.org/
00:02:12.620 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:02:12.620 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:02:12.620 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:02:12.620 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:02:12.620 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:02:12.620 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:02:12.620 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:02:12.620 ++ SUPPORT_END=2024-11-12
00:02:12.620 ++ VARIANT='Cloud Edition'
00:02:12.620 ++ VARIANT_ID=cloud
00:02:12.620 + uname -a
00:02:12.620 Linux spdk-wfp-04 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 05:41:37 UTC 2024 x86_64 GNU/Linux
00:02:12.620 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:02:15.162 Hugepages
00:02:15.162 node hugesize free / total
00:02:15.162 node0 1048576kB 0 / 0
00:02:15.162 node0 2048kB 0 / 0
00:02:15.162 node1 1048576kB 0 / 0
00:02:15.162 node1 2048kB 0 / 0
00:02:15.162
00:02:15.162 Type BDF Vendor Device NUMA Driver Device Block devices
00:02:15.162 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:02:15.162 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:02:15.162 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:15.162 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:15.162 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:15.162 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:15.162 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:15.162 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:15.162 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:15.162 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:15.162 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:15.162 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:15.162 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:15.162 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:15.162 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:15.162 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:02:15.162 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:15.162 + rm -f /tmp/spdk-ld-path 00:02:15.162 + source autorun-spdk.conf 00:02:15.162 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.162 ++ SPDK_TEST_NVMF=1 00:02:15.162 ++ SPDK_TEST_NVME_CLI=1 00:02:15.162 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:15.162 ++ SPDK_TEST_NVMF_NICS=e810 00:02:15.162 ++ SPDK_TEST_VFIOUSER=1 00:02:15.162 ++ SPDK_RUN_UBSAN=1 00:02:15.162 ++ NET_TYPE=phy 00:02:15.162 ++ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:15.162 ++ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.162 ++ RUN_NIGHTLY=1 00:02:15.162 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:02:15.162 + [[ -n '' ]] 00:02:15.162 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:15.162 + for M in /var/spdk/build-*-manifest.txt 00:02:15.162 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:02:15.162 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:15.162 + for M in /var/spdk/build-*-manifest.txt 00:02:15.162 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:02:15.162 + cp /var/spdk/build-pkg-manifest.txt 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:15.162 + for M in /var/spdk/build-*-manifest.txt 00:02:15.162 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:02:15.162 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:02:15.162 ++ uname 00:02:15.162 + [[ Linux == \L\i\n\u\x ]] 00:02:15.162 + sudo dmesg -T 00:02:15.162 + sudo dmesg --clear 00:02:15.162 + dmesg_pid=7537 00:02:15.162 + [[ Fedora Linux == FreeBSD ]] 00:02:15.162 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.162 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:02:15.162 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:02:15.162 + sudo dmesg -Tw 00:02:15.162 + [[ -x /usr/src/fio-static/fio ]] 00:02:15.162 + export FIO_BIN=/usr/src/fio-static/fio 00:02:15.162 + FIO_BIN=/usr/src/fio-static/fio 00:02:15.162 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:02:15.162 + [[ ! -v VFIO_QEMU_BIN ]] 00:02:15.162 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:02:15.162 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.162 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:02:15.162 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:02:15.162 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.162 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:02:15.162 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:15.424 05:02:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:15.424 05:02:28 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- 
$ SPDK_TEST_NVME_CLI=1 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ SPDK_TEST_NATIVE_DPDK=v22.11.4 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@10 -- $ SPDK_RUN_EXTERNAL_DPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.424 05:02:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@11 -- $ RUN_NIGHTLY=1 00:02:15.424 05:02:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:02:15.424 05:02:28 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:15.424 05:02:28 -- common/autotest_common.sh@1710 -- $ [[ n == y ]] 00:02:15.424 05:02:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:15.424 05:02:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:02:15.424 05:02:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:02:15.424 05:02:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:15.424 05:02:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:15.424 05:02:28 -- paths/export.sh@2 -- $ 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.424 05:02:28 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.424 05:02:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.424 05:02:28 -- paths/export.sh@5 -- $ export PATH 00:02:15.424 05:02:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:15.424 05:02:28 -- 
common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:15.424 05:02:28 -- common/autobuild_common.sh@493 -- $ date +%s 00:02:15.424 05:02:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734235348.XXXXXX 00:02:15.424 05:02:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734235348.je3hsl 00:02:15.424 05:02:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:02:15.424 05:02:28 -- common/autobuild_common.sh@499 -- $ '[' -n v22.11.4 ']' 00:02:15.424 05:02:28 -- common/autobuild_common.sh@500 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:15.424 05:02:28 -- common/autobuild_common.sh@500 -- $ scanbuild_exclude=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk' 00:02:15.424 05:02:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:02:15.424 05:02:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:02:15.424 05:02:28 -- common/autobuild_common.sh@509 -- $ get_config_params 00:02:15.424 05:02:28 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:02:15.424 05:02:28 -- common/autotest_common.sh@10 -- $ set +x 00:02:15.424 05:02:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build' 00:02:15.424 05:02:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:02:15.424 05:02:28 -- pm/common@17 -- $ local monitor 00:02:15.424 05:02:28 -- 
pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.424 05:02:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.424 05:02:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.424 05:02:28 -- pm/common@21 -- $ date +%s 00:02:15.424 05:02:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:15.424 05:02:28 -- pm/common@21 -- $ date +%s 00:02:15.424 05:02:28 -- pm/common@25 -- $ sleep 1 00:02:15.424 05:02:28 -- pm/common@21 -- $ date +%s 00:02:15.424 05:02:28 -- pm/common@21 -- $ date +%s 00:02:15.424 05:02:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734235348 00:02:15.424 05:02:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734235348 00:02:15.424 05:02:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734235348 00:02:15.424 05:02:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1734235348 00:02:15.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734235348_collect-cpu-temp.pm.log 00:02:15.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734235348_collect-vmstat.pm.log 00:02:15.424 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734235348_collect-cpu-load.pm.log 00:02:15.425 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1734235348_collect-bmc-pm.bmc.pm.log 00:02:16.365 05:02:29 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:02:16.365 05:02:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:02:16.365 05:02:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:02:16.365 05:02:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.365 05:02:29 -- spdk/autobuild.sh@16 -- $ date -u 00:02:16.365 Sun Dec 15 04:02:29 AM UTC 2024 00:02:16.365 05:02:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:02:16.365 v25.01-rc1-2-ge01cb43b8 00:02:16.365 05:02:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:02:16.365 05:02:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:02:16.365 05:02:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:02:16.365 05:02:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:16.365 05:02:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:16.365 05:02:29 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.365 ************************************ 00:02:16.365 START TEST ubsan 00:02:16.365 ************************************ 00:02:16.365 05:02:30 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:02:16.365 using ubsan 00:02:16.365 00:02:16.365 real 0m0.000s 00:02:16.365 user 0m0.000s 00:02:16.365 sys 0m0.000s 00:02:16.365 05:02:30 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:16.365 05:02:30 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:02:16.365 ************************************ 00:02:16.365 END TEST ubsan 00:02:16.365 ************************************ 00:02:16.627 05:02:30 -- spdk/autobuild.sh@27 -- $ '[' -n v22.11.4 ']' 00:02:16.627 05:02:30 -- spdk/autobuild.sh@28 -- $ build_native_dpdk 00:02:16.627 05:02:30 -- common/autobuild_common.sh@449 -- $ run_test build_native_dpdk _build_native_dpdk 00:02:16.627 05:02:30 -- 
common/autotest_common.sh@1105 -- $ '[' 2 -le 1 ']' 00:02:16.627 05:02:30 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:16.627 05:02:30 -- common/autotest_common.sh@10 -- $ set +x 00:02:16.627 ************************************ 00:02:16.627 START TEST build_native_dpdk 00:02:16.627 ************************************ 00:02:16.627 05:02:30 build_native_dpdk -- common/autotest_common.sh@1129 -- $ _build_native_dpdk 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@48 -- $ local external_dpdk_dir 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@49 -- $ local external_dpdk_base_dir 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@50 -- $ local compiler_version 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@51 -- $ local compiler 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@52 -- $ local dpdk_kmods 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@53 -- $ local repo=dpdk 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@55 -- $ compiler=gcc 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ export CC=gcc 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@61 -- $ CC=gcc 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *clang* ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@63 -- $ [[ gcc != *gcc* ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ gcc -dumpversion 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@68 -- $ compiler_version=13 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@69 -- $ compiler_version=13 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@70 -- $ external_dpdk_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:16.627 05:02:30 build_native_dpdk -- 
common/autobuild_common.sh@71 -- $ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@71 -- $ external_dpdk_base_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@73 -- $ [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@82 -- $ orgdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@83 -- $ git -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk log --oneline -n 5 00:02:16.627 caf0f5d395 version: 22.11.4 00:02:16.627 7d6f1cc05f Revert "net/iavf: fix abnormal disable HW interrupt" 00:02:16.627 dc9c799c7d vhost: fix missing spinlock unlock 00:02:16.627 4307659a90 net/mlx5: fix LACP redirection in Rx domain 00:02:16.627 6ef77f2a5e net/gve: fix RX buffer size alignment 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@85 -- $ dpdk_cflags='-fPIC -g -fcommon' 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@86 -- $ dpdk_ldflags= 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@87 -- $ dpdk_ver=22.11.4 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ gcc == *gcc* ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@89 -- $ [[ 13 -ge 5 ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@90 -- $ dpdk_cflags+=' -Werror' 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ gcc == *gcc* ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@93 -- $ [[ 13 -ge 10 ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@94 -- $ dpdk_cflags+=' -Wno-stringop-overflow' 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@102 -- $ DPDK_DRIVERS=("bus" "bus/pci" 
"bus/vdev" "mempool/ring" "net/i40e" "net/i40e/base" "power/acpi" "power/amd_pstate" "power/cppc" "power/intel_pstate" "power/intel_uncore" "power/kvm_vm") 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@103 -- $ local mlx5_libs_added=n 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@104 -- $ [[ 0 -eq 1 ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@146 -- $ [[ 0 -eq 1 ]] 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@174 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@175 -- $ uname -s 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@175 -- $ '[' Linux = Linux ']' 00:02:16.627 05:02:30 build_native_dpdk -- common/autobuild_common.sh@176 -- $ lt 22.11.4 21.11.0 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 21.11.0 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.627 05:02:30 
build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.627 05:02:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 21 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=21 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 21 =~ ^[0-9]+$ ]] 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 21 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=21 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@367 -- $ return 1 00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@180 -- $ patch -p1 00:02:16.628 patching file config/rte_config.h 00:02:16.628 Hunk #1 succeeded at 60 (offset 1 line). 
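The xtrace above steps through the `cmp_versions` helper from scripts/common.sh: both version strings are split on `IFS=.-:`, each component is validated against `^[0-9]+$` via the `decimal` helper, and the components are compared pairwise until one side wins (here `22 > 21`, so `lt 22.11.4 21.11.0` returns 1 and the rte_config.h patch path is taken). A minimal bash sketch of the same idea — an illustration, not the actual scripts/common.sh implementation:

```shell
#!/usr/bin/env bash
# Sketch of component-wise version comparison in the spirit of the
# cmp_versions trace above (assumed simplification: dot-separated
# numeric versions only, no '-' or ':' separators).
# Returns 0 (true) when $1 < $2; missing components count as 0,
# and 10# forces base-10 so components like "07" are not octal.
version_lt() {
    local IFS=.
    local -a a=($1) b=($2)
    local i x y
    for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
        x=${a[i]:-0} y=${b[i]:-0}
        (( 10#$x < 10#$y )) && return 0
        (( 10#$x > 10#$y )) && return 1
    done
    return 1   # versions are equal
}

version_lt 22.11.4 24.07.0 && echo "22.11.4 < 24.07.0"
version_lt 22.11.4 21.11.0 || echo "22.11.4 >= 21.11.0"
```

This mirrors the two decisions visible in the log: `lt 22.11.4 21.11.0` is false and `lt 22.11.4 24.07.0` is true, which is why both version-gated patches end up applied.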
00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@183 -- $ lt 22.11.4 24.07.0 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@373 -- $ cmp_versions 22.11.4 '<' 24.07.0 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=<' 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@345 -- $ : 1 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@368 -- $ return 0 00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@184 -- $ patch -p1 00:02:16.628 patching file lib/pcapng/rte_pcapng.c 00:02:16.628 Hunk #1 succeeded at 110 (offset -18 lines). 
00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@186 -- $ ge 22.11.4 24.07.0 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@376 -- $ cmp_versions 22.11.4 '>=' 24.07.0 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@333 -- $ local ver1 ver1_l 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@334 -- $ local ver2 ver2_l 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@336 -- $ IFS=.-: 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@336 -- $ read -ra ver1 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@337 -- $ IFS=.-: 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@337 -- $ read -ra ver2 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@338 -- $ local 'op=>=' 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@340 -- $ ver1_l=3 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@341 -- $ ver2_l=3 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@343 -- $ local lt=0 gt=0 eq=0 v 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@344 -- $ case "$op" in 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@348 -- $ : 1 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v = 0 )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@364 -- $ (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@365 -- $ decimal 22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 22 =~ ^[0-9]+$ ]] 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@365 -- $ ver1[v]=22 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@366 -- $ decimal 24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@353 -- $ local d=24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@354 -- $ [[ 24 =~ ^[0-9]+$ ]] 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@355 -- $ echo 24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@366 -- $ ver2[v]=24 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@367 -- $ (( ver1[v] > ver2[v] )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@368 -- $ (( ver1[v] < ver2[v] )) 00:02:16.628 05:02:30 build_native_dpdk -- scripts/common.sh@368 -- $ return 1 00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@190 -- $ dpdk_kmods=false 00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ uname -s 00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@191 -- $ '[' Linux = FreeBSD ']' 00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@195 -- $ printf %s, bus bus/pci bus/vdev mempool/ring net/i40e net/i40e/base power/acpi power/amd_pstate power/cppc power/intel_pstate power/intel_uncore power/kvm_vm 00:02:16.628 05:02:30 build_native_dpdk -- common/autobuild_common.sh@195 -- $ meson build-tmp --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --libdir lib -Denable_docs=false -Denable_kmods=false -Dtests=false -Dc_link_args= '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' -Dmachine=native 
-Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:23.204 The Meson build system 00:02:23.204 Version: 1.5.0 00:02:23.204 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk 00:02:23.204 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp 00:02:23.204 Build type: native build 00:02:23.204 Program cat found: YES (/usr/bin/cat) 00:02:23.204 Project name: DPDK 00:02:23.204 Project version: 22.11.4 00:02:23.204 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:23.204 C linker for the host machine: gcc ld.bfd 2.40-14 00:02:23.204 Host machine cpu family: x86_64 00:02:23.204 Host machine cpu: x86_64 00:02:23.204 Message: ## Building in Developer Mode ## 00:02:23.204 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:23.204 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/check-symbols.sh) 00:02:23.204 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/buildtools/options-ibverbs-static.sh) 00:02:23.204 Program objdump found: YES (/usr/bin/objdump) 00:02:23.204 Program python3 found: YES (/usr/bin/python3) 00:02:23.204 Program cat found: YES (/usr/bin/cat) 00:02:23.204 config/meson.build:83: WARNING: The "machine" option is deprecated. Please use "cpu_instruction_set" instead. 
00:02:23.204 Checking for size of "void *" : 8 00:02:23.204 Checking for size of "void *" : 8 (cached) 00:02:23.204 Library m found: YES 00:02:23.204 Library numa found: YES 00:02:23.204 Has header "numaif.h" : YES 00:02:23.204 Library fdt found: NO 00:02:23.204 Library execinfo found: NO 00:02:23.204 Has header "execinfo.h" : YES 00:02:23.204 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:23.204 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:23.204 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:23.204 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:23.204 Run-time dependency openssl found: YES 3.1.1 00:02:23.204 Run-time dependency libpcap found: YES 1.10.4 00:02:23.204 Has header "pcap.h" with dependency libpcap: YES 00:02:23.204 Compiler for C supports arguments -Wcast-qual: YES 00:02:23.204 Compiler for C supports arguments -Wdeprecated: YES 00:02:23.204 Compiler for C supports arguments -Wformat: YES 00:02:23.204 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:23.204 Compiler for C supports arguments -Wformat-security: NO 00:02:23.204 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:23.204 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:23.204 Compiler for C supports arguments -Wnested-externs: YES 00:02:23.204 Compiler for C supports arguments -Wold-style-definition: YES 00:02:23.204 Compiler for C supports arguments -Wpointer-arith: YES 00:02:23.204 Compiler for C supports arguments -Wsign-compare: YES 00:02:23.204 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:23.204 Compiler for C supports arguments -Wundef: YES 00:02:23.204 Compiler for C supports arguments -Wwrite-strings: YES 00:02:23.204 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:23.204 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:23.204 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:23.204 
Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:23.204 Compiler for C supports arguments -mavx512f: YES 00:02:23.204 Checking if "AVX512 checking" compiles: YES 00:02:23.204 Fetching value of define "__SSE4_2__" : 1 00:02:23.204 Fetching value of define "__AES__" : 1 00:02:23.204 Fetching value of define "__AVX__" : 1 00:02:23.204 Fetching value of define "__AVX2__" : 1 00:02:23.204 Fetching value of define "__AVX512BW__" : 1 00:02:23.204 Fetching value of define "__AVX512CD__" : 1 00:02:23.204 Fetching value of define "__AVX512DQ__" : 1 00:02:23.204 Fetching value of define "__AVX512F__" : 1 00:02:23.204 Fetching value of define "__AVX512VL__" : 1 00:02:23.204 Fetching value of define "__PCLMUL__" : 1 00:02:23.205 Fetching value of define "__RDRND__" : 1 00:02:23.205 Fetching value of define "__RDSEED__" : 1 00:02:23.205 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:23.205 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:23.205 Message: lib/kvargs: Defining dependency "kvargs" 00:02:23.205 Message: lib/telemetry: Defining dependency "telemetry" 00:02:23.205 Checking for function "getentropy" : YES 00:02:23.205 Message: lib/eal: Defining dependency "eal" 00:02:23.205 Message: lib/ring: Defining dependency "ring" 00:02:23.205 Message: lib/rcu: Defining dependency "rcu" 00:02:23.205 Message: lib/mempool: Defining dependency "mempool" 00:02:23.205 Message: lib/mbuf: Defining dependency "mbuf" 00:02:23.205 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.205 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:23.205 Compiler for C supports arguments -mpclmul: YES 00:02:23.205 Compiler for C supports arguments -maes: YES 
00:02:23.205 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.205 Compiler for C supports arguments -mavx512bw: YES 00:02:23.205 Compiler for C supports arguments -mavx512dq: YES 00:02:23.205 Compiler for C supports arguments -mavx512vl: YES 00:02:23.205 Compiler for C supports arguments -mvpclmulqdq: YES 00:02:23.205 Compiler for C supports arguments -mavx2: YES 00:02:23.205 Compiler for C supports arguments -mavx: YES 00:02:23.205 Message: lib/net: Defining dependency "net" 00:02:23.205 Message: lib/meter: Defining dependency "meter" 00:02:23.205 Message: lib/ethdev: Defining dependency "ethdev" 00:02:23.205 Message: lib/pci: Defining dependency "pci" 00:02:23.205 Message: lib/cmdline: Defining dependency "cmdline" 00:02:23.205 Message: lib/metrics: Defining dependency "metrics" 00:02:23.205 Message: lib/hash: Defining dependency "hash" 00:02:23.205 Message: lib/timer: Defining dependency "timer" 00:02:23.205 Fetching value of define "__AVX2__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512CD__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.205 Message: lib/acl: Defining dependency "acl" 00:02:23.205 Message: lib/bbdev: Defining dependency "bbdev" 00:02:23.205 Message: lib/bitratestats: Defining dependency "bitratestats" 00:02:23.205 Run-time dependency libelf found: YES 0.191 00:02:23.205 Message: lib/bpf: Defining dependency "bpf" 00:02:23.205 Message: lib/cfgfile: Defining dependency "cfgfile" 00:02:23.205 Message: lib/compressdev: Defining dependency "compressdev" 00:02:23.205 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:23.205 Message: lib/distributor: Defining dependency "distributor" 00:02:23.205 Message: lib/efd: Defining dependency "efd" 00:02:23.205 Message: lib/eventdev: Defining dependency "eventdev" 00:02:23.205 Message: lib/gpudev: 
Defining dependency "gpudev" 00:02:23.205 Message: lib/gro: Defining dependency "gro" 00:02:23.205 Message: lib/gso: Defining dependency "gso" 00:02:23.205 Message: lib/ip_frag: Defining dependency "ip_frag" 00:02:23.205 Message: lib/jobstats: Defining dependency "jobstats" 00:02:23.205 Message: lib/latencystats: Defining dependency "latencystats" 00:02:23.205 Message: lib/lpm: Defining dependency "lpm" 00:02:23.205 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512IFMA__" : (undefined) 00:02:23.205 Compiler for C supports arguments -mavx512f -mavx512dq -mavx512ifma: YES 00:02:23.205 Message: lib/member: Defining dependency "member" 00:02:23.205 Message: lib/pcapng: Defining dependency "pcapng" 00:02:23.205 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:23.205 Message: lib/power: Defining dependency "power" 00:02:23.205 Message: lib/rawdev: Defining dependency "rawdev" 00:02:23.205 Message: lib/regexdev: Defining dependency "regexdev" 00:02:23.205 Message: lib/dmadev: Defining dependency "dmadev" 00:02:23.205 Message: lib/rib: Defining dependency "rib" 00:02:23.205 Message: lib/reorder: Defining dependency "reorder" 00:02:23.205 Message: lib/sched: Defining dependency "sched" 00:02:23.205 Message: lib/security: Defining dependency "security" 00:02:23.205 Message: lib/stack: Defining dependency "stack" 00:02:23.205 Has header "linux/userfaultfd.h" : YES 00:02:23.205 Message: lib/vhost: Defining dependency "vhost" 00:02:23.205 Message: lib/ipsec: Defining dependency "ipsec" 00:02:23.205 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:23.205 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.205 Message: lib/fib: Defining dependency "fib" 00:02:23.205 Message: lib/port: Defining dependency "port" 00:02:23.205 Message: lib/pdump: Defining dependency "pdump" 
00:02:23.205 Message: lib/table: Defining dependency "table" 00:02:23.205 Message: lib/pipeline: Defining dependency "pipeline" 00:02:23.205 Message: lib/graph: Defining dependency "graph" 00:02:23.205 Message: lib/node: Defining dependency "node" 00:02:23.205 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:23.205 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:23.205 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:23.205 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:23.205 Compiler for C supports arguments -Wno-sign-compare: YES 00:02:23.205 Compiler for C supports arguments -Wno-unused-value: YES 00:02:23.205 Compiler for C supports arguments -Wno-format: YES 00:02:23.205 Compiler for C supports arguments -Wno-format-security: YES 00:02:23.205 Compiler for C supports arguments -Wno-format-nonliteral: YES 00:02:23.466 Compiler for C supports arguments -Wno-strict-aliasing: YES 00:02:23.466 Compiler for C supports arguments -Wno-unused-but-set-variable: YES 00:02:23.466 Compiler for C supports arguments -Wno-unused-parameter: YES 00:02:23.466 Fetching value of define "__AVX2__" : 1 (cached) 00:02:23.466 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:23.466 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:23.466 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:23.466 Compiler for C supports arguments -mavx512bw: YES (cached) 00:02:23.466 Compiler for C supports arguments -march=skylake-avx512: YES 00:02:23.466 Message: drivers/net/i40e: Defining dependency "net_i40e" 00:02:23.466 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:23.466 Configuring doxy-api.conf using configuration 00:02:23.466 Program sphinx-build found: NO 00:02:23.466 Configuring rte_build_config.h using configuration 00:02:23.466 Message: 00:02:23.466 ================= 00:02:23.466 Applications Enabled 00:02:23.466 ================= 00:02:23.466 00:02:23.466 apps: 
00:02:23.466 dumpcap, pdump, proc-info, test-acl, test-bbdev, test-cmdline, test-compress-perf, test-crypto-perf, 00:02:23.466 test-eventdev, test-fib, test-flow-perf, test-gpudev, test-pipeline, test-pmd, test-regex, test-sad, 00:02:23.466 test-security-perf, 00:02:23.466 00:02:23.466 Message: 00:02:23.466 ================= 00:02:23.466 Libraries Enabled 00:02:23.466 ================= 00:02:23.466 00:02:23.466 libs: 00:02:23.466 kvargs, telemetry, eal, ring, rcu, mempool, mbuf, net, 00:02:23.466 meter, ethdev, pci, cmdline, metrics, hash, timer, acl, 00:02:23.466 bbdev, bitratestats, bpf, cfgfile, compressdev, cryptodev, distributor, efd, 00:02:23.466 eventdev, gpudev, gro, gso, ip_frag, jobstats, latencystats, lpm, 00:02:23.466 member, pcapng, power, rawdev, regexdev, dmadev, rib, reorder, 00:02:23.466 sched, security, stack, vhost, ipsec, fib, port, pdump, 00:02:23.466 table, pipeline, graph, node, 00:02:23.466 00:02:23.466 Message: 00:02:23.466 =============== 00:02:23.466 Drivers Enabled 00:02:23.466 =============== 00:02:23.466 00:02:23.466 common: 00:02:23.466 00:02:23.466 bus: 00:02:23.466 pci, vdev, 00:02:23.466 mempool: 00:02:23.466 ring, 00:02:23.466 dma: 00:02:23.466 00:02:23.466 net: 00:02:23.466 i40e, 00:02:23.466 raw: 00:02:23.466 00:02:23.466 crypto: 00:02:23.466 00:02:23.466 compress: 00:02:23.466 00:02:23.466 regex: 00:02:23.466 00:02:23.466 vdpa: 00:02:23.466 00:02:23.466 event: 00:02:23.466 00:02:23.466 baseband: 00:02:23.466 00:02:23.466 gpu: 00:02:23.466 00:02:23.466 00:02:23.466 Message: 00:02:23.466 ================= 00:02:23.466 Content Skipped 00:02:23.466 ================= 00:02:23.466 00:02:23.466 apps: 00:02:23.466 00:02:23.466 libs: 00:02:23.466 kni: explicitly disabled via build config (deprecated lib) 00:02:23.466 flow_classify: explicitly disabled via build config (deprecated lib) 00:02:23.466 00:02:23.466 drivers: 00:02:23.466 common/cpt: not in enabled drivers build config 00:02:23.466 common/dpaax: not in enabled drivers build 
config 00:02:23.466 common/iavf: not in enabled drivers build config 00:02:23.466 common/idpf: not in enabled drivers build config 00:02:23.466 common/mvep: not in enabled drivers build config 00:02:23.466 common/octeontx: not in enabled drivers build config 00:02:23.466 bus/auxiliary: not in enabled drivers build config 00:02:23.466 bus/dpaa: not in enabled drivers build config 00:02:23.466 bus/fslmc: not in enabled drivers build config 00:02:23.466 bus/ifpga: not in enabled drivers build config 00:02:23.466 bus/vmbus: not in enabled drivers build config 00:02:23.466 common/cnxk: not in enabled drivers build config 00:02:23.466 common/mlx5: not in enabled drivers build config 00:02:23.466 common/qat: not in enabled drivers build config 00:02:23.466 common/sfc_efx: not in enabled drivers build config 00:02:23.466 mempool/bucket: not in enabled drivers build config 00:02:23.466 mempool/cnxk: not in enabled drivers build config 00:02:23.466 mempool/dpaa: not in enabled drivers build config 00:02:23.466 mempool/dpaa2: not in enabled drivers build config 00:02:23.466 mempool/octeontx: not in enabled drivers build config 00:02:23.466 mempool/stack: not in enabled drivers build config 00:02:23.466 dma/cnxk: not in enabled drivers build config 00:02:23.466 dma/dpaa: not in enabled drivers build config 00:02:23.466 dma/dpaa2: not in enabled drivers build config 00:02:23.466 dma/hisilicon: not in enabled drivers build config 00:02:23.466 dma/idxd: not in enabled drivers build config 00:02:23.466 dma/ioat: not in enabled drivers build config 00:02:23.466 dma/skeleton: not in enabled drivers build config 00:02:23.466 net/af_packet: not in enabled drivers build config 00:02:23.466 net/af_xdp: not in enabled drivers build config 00:02:23.466 net/ark: not in enabled drivers build config 00:02:23.466 net/atlantic: not in enabled drivers build config 00:02:23.466 net/avp: not in enabled drivers build config 00:02:23.466 net/axgbe: not in enabled drivers build config 00:02:23.466 
net/bnx2x: not in enabled drivers build config 00:02:23.466 net/bnxt: not in enabled drivers build config 00:02:23.466 net/bonding: not in enabled drivers build config 00:02:23.466 net/cnxk: not in enabled drivers build config 00:02:23.466 net/cxgbe: not in enabled drivers build config 00:02:23.466 net/dpaa: not in enabled drivers build config 00:02:23.466 net/dpaa2: not in enabled drivers build config 00:02:23.466 net/e1000: not in enabled drivers build config 00:02:23.466 net/ena: not in enabled drivers build config 00:02:23.466 net/enetc: not in enabled drivers build config 00:02:23.466 net/enetfec: not in enabled drivers build config 00:02:23.466 net/enic: not in enabled drivers build config 00:02:23.466 net/failsafe: not in enabled drivers build config 00:02:23.466 net/fm10k: not in enabled drivers build config 00:02:23.466 net/gve: not in enabled drivers build config 00:02:23.466 net/hinic: not in enabled drivers build config 00:02:23.466 net/hns3: not in enabled drivers build config 00:02:23.466 net/iavf: not in enabled drivers build config 00:02:23.466 net/ice: not in enabled drivers build config 00:02:23.466 net/idpf: not in enabled drivers build config 00:02:23.466 net/igc: not in enabled drivers build config 00:02:23.466 net/ionic: not in enabled drivers build config 00:02:23.466 net/ipn3ke: not in enabled drivers build config 00:02:23.466 net/ixgbe: not in enabled drivers build config 00:02:23.466 net/kni: not in enabled drivers build config 00:02:23.466 net/liquidio: not in enabled drivers build config 00:02:23.466 net/mana: not in enabled drivers build config 00:02:23.466 net/memif: not in enabled drivers build config 00:02:23.466 net/mlx4: not in enabled drivers build config 00:02:23.466 net/mlx5: not in enabled drivers build config 00:02:23.466 net/mvneta: not in enabled drivers build config 00:02:23.466 net/mvpp2: not in enabled drivers build config 00:02:23.466 net/netvsc: not in enabled drivers build config 00:02:23.466 net/nfb: not in enabled 
drivers build config 00:02:23.466 net/nfp: not in enabled drivers build config 00:02:23.466 net/ngbe: not in enabled drivers build config 00:02:23.466 net/null: not in enabled drivers build config 00:02:23.466 net/octeontx: not in enabled drivers build config 00:02:23.466 net/octeon_ep: not in enabled drivers build config 00:02:23.466 net/pcap: not in enabled drivers build config 00:02:23.466 net/pfe: not in enabled drivers build config 00:02:23.466 net/qede: not in enabled drivers build config 00:02:23.466 net/ring: not in enabled drivers build config 00:02:23.466 net/sfc: not in enabled drivers build config 00:02:23.466 net/softnic: not in enabled drivers build config 00:02:23.466 net/tap: not in enabled drivers build config 00:02:23.466 net/thunderx: not in enabled drivers build config 00:02:23.466 net/txgbe: not in enabled drivers build config 00:02:23.466 net/vdev_netvsc: not in enabled drivers build config 00:02:23.466 net/vhost: not in enabled drivers build config 00:02:23.466 net/virtio: not in enabled drivers build config 00:02:23.466 net/vmxnet3: not in enabled drivers build config 00:02:23.466 raw/cnxk_bphy: not in enabled drivers build config 00:02:23.466 raw/cnxk_gpio: not in enabled drivers build config 00:02:23.466 raw/dpaa2_cmdif: not in enabled drivers build config 00:02:23.466 raw/ifpga: not in enabled drivers build config 00:02:23.466 raw/ntb: not in enabled drivers build config 00:02:23.466 raw/skeleton: not in enabled drivers build config 00:02:23.466 crypto/armv8: not in enabled drivers build config 00:02:23.466 crypto/bcmfs: not in enabled drivers build config 00:02:23.466 crypto/caam_jr: not in enabled drivers build config 00:02:23.466 crypto/ccp: not in enabled drivers build config 00:02:23.466 crypto/cnxk: not in enabled drivers build config 00:02:23.466 crypto/dpaa_sec: not in enabled drivers build config 00:02:23.466 crypto/dpaa2_sec: not in enabled drivers build config 00:02:23.466 crypto/ipsec_mb: not in enabled drivers build config 
00:02:23.466 crypto/mlx5: not in enabled drivers build config 00:02:23.466 crypto/mvsam: not in enabled drivers build config 00:02:23.466 crypto/nitrox: not in enabled drivers build config 00:02:23.466 crypto/null: not in enabled drivers build config 00:02:23.466 crypto/octeontx: not in enabled drivers build config 00:02:23.466 crypto/openssl: not in enabled drivers build config 00:02:23.466 crypto/scheduler: not in enabled drivers build config 00:02:23.466 crypto/uadk: not in enabled drivers build config 00:02:23.466 crypto/virtio: not in enabled drivers build config 00:02:23.466 compress/isal: not in enabled drivers build config 00:02:23.466 compress/mlx5: not in enabled drivers build config 00:02:23.466 compress/octeontx: not in enabled drivers build config 00:02:23.466 compress/zlib: not in enabled drivers build config 00:02:23.466 regex/mlx5: not in enabled drivers build config 00:02:23.466 regex/cn9k: not in enabled drivers build config 00:02:23.466 vdpa/ifc: not in enabled drivers build config 00:02:23.466 vdpa/mlx5: not in enabled drivers build config 00:02:23.466 vdpa/sfc: not in enabled drivers build config 00:02:23.466 event/cnxk: not in enabled drivers build config 00:02:23.466 event/dlb2: not in enabled drivers build config 00:02:23.466 event/dpaa: not in enabled drivers build config 00:02:23.466 event/dpaa2: not in enabled drivers build config 00:02:23.466 event/dsw: not in enabled drivers build config 00:02:23.466 event/opdl: not in enabled drivers build config 00:02:23.466 event/skeleton: not in enabled drivers build config 00:02:23.466 event/sw: not in enabled drivers build config 00:02:23.466 event/octeontx: not in enabled drivers build config 00:02:23.467 baseband/acc: not in enabled drivers build config 00:02:23.467 baseband/fpga_5gnr_fec: not in enabled drivers build config 00:02:23.467 baseband/fpga_lte_fec: not in enabled drivers build config 00:02:23.467 baseband/la12xx: not in enabled drivers build config 00:02:23.467 baseband/null: not in 
enabled drivers build config 00:02:23.467 baseband/turbo_sw: not in enabled drivers build config 00:02:23.467 gpu/cuda: not in enabled drivers build config 00:02:23.467 00:02:23.467 00:02:23.467 Build targets in project: 311 00:02:23.467 00:02:23.467 DPDK 22.11.4 00:02:23.467 00:02:23.467 User defined options 00:02:23.467 libdir : lib 00:02:23.467 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:02:23.467 c_args : -fPIC -g -fcommon -Werror -Wno-stringop-overflow 00:02:23.467 c_link_args : 00:02:23.467 enable_docs : false 00:02:23.467 enable_drivers: bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm, 00:02:23.467 enable_kmods : false 00:02:23.467 machine : native 00:02:23.467 tests : false 00:02:23.467 00:02:23.467 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.467 WARNING: Running the setup command as `meson [options]` instead of `meson setup [options]` is ambiguous and deprecated. 
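The warning closing the configure output ("Running the setup command as `meson [options]` ... is ambiguous and deprecated") refers to the bare `meson build-tmp ...` invocation at @195. The non-deprecated spelling uses the explicit `setup` subcommand; a sketch reusing the options from this run (configuration fragment only — it needs the DPDK source tree from this job to actually run):

```shell
# Equivalent explicit-subcommand form of the configure step above.
# Options are copied from the log; note the "machine" option is itself
# deprecated in favor of "cpu_instruction_set" per the earlier warning.
meson setup build-tmp \
    --prefix=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build \
    --libdir lib \
    -Denable_docs=false -Denable_kmods=false -Dtests=false \
    -Dc_link_args= \
    '-Dc_args=-fPIC -g -fcommon -Werror -Wno-stringop-overflow' \
    -Dmachine=native \
    -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring,net/i40e,net/i40e/base,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
```

The trailing comma in the logged `enable_drivers` value comes from the `printf %s,` join at @195; Meson tolerates the empty final list element, so it is dropped here.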
00:02:23.729 05:02:37 build_native_dpdk -- common/autobuild_common.sh@199 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 00:02:23.729 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp' 00:02:23.729 [1/740] Generating lib/rte_kvargs_mingw with a custom command 00:02:23.729 [2/740] Generating lib/rte_telemetry_def with a custom command 00:02:23.729 [3/740] Generating lib/rte_telemetry_mingw with a custom command 00:02:23.729 [4/740] Generating lib/rte_kvargs_def with a custom command 00:02:23.729 [5/740] Generating lib/rte_ring_def with a custom command 00:02:23.729 [6/740] Generating lib/rte_eal_def with a custom command 00:02:23.729 [7/740] Generating lib/rte_eal_mingw with a custom command 00:02:23.729 [8/740] Generating lib/rte_mempool_mingw with a custom command 00:02:23.729 [9/740] Generating lib/rte_mempool_def with a custom command 00:02:23.729 [10/740] Generating lib/rte_rcu_mingw with a custom command 00:02:23.729 [11/740] Generating lib/rte_mbuf_def with a custom command 00:02:23.729 [12/740] Generating lib/rte_ring_mingw with a custom command 00:02:23.729 [13/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.729 [14/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.729 [15/740] Generating lib/rte_rcu_def with a custom command 00:02:23.729 [16/740] Generating lib/rte_mbuf_mingw with a custom command 00:02:23.729 [17/740] Generating lib/rte_net_mingw with a custom command 00:02:23.729 [18/740] Generating lib/rte_meter_def with a custom command 00:02:23.729 [19/740] Generating lib/rte_meter_mingw with a custom command 00:02:23.993 [20/740] Generating lib/rte_net_def with a custom command 00:02:23.993 [21/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.993 [22/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_log.c.o 00:02:23.993 [23/740] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.993 [24/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.993 [25/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.993 [26/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:23.993 [27/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.993 [28/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.993 [29/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.993 [30/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.993 [31/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:23.993 [32/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:23.993 [33/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:23.993 [34/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.993 [35/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:23.993 [36/740] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.993 [37/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.993 [38/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:23.993 [39/740] Generating lib/rte_ethdev_def with a custom command 00:02:23.993 [40/740] Generating lib/rte_pci_def with a custom command 00:02:23.993 [41/740] Generating lib/rte_ethdev_mingw with a custom command 00:02:23.993 [42/740] Linking static target lib/librte_kvargs.a 00:02:23.993 [43/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.993 [44/740] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:23.993 [45/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:23.993 [46/740] Compiling C object 
lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:23.993 [47/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:23.993 [48/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.993 [49/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:23.993 [50/740] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:23.993 [51/740] Generating lib/rte_pci_mingw with a custom command 00:02:23.993 [52/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:23.993 [53/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:23.993 [54/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:23.993 [55/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:23.993 [56/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:23.993 [57/740] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:23.993 [58/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:23.993 [59/740] Generating lib/rte_cmdline_def with a custom command 00:02:23.993 [60/740] Generating lib/rte_cmdline_mingw with a custom command 00:02:23.993 [61/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:23.993 [62/740] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:23.993 [63/740] Generating lib/rte_metrics_def with a custom command 00:02:23.993 [64/740] Generating lib/rte_metrics_mingw with a custom command 00:02:23.993 [65/740] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.993 [66/740] Linking static target lib/librte_ring.a 00:02:23.993 [67/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:23.993 [68/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:23.993 [69/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 
00:02:23.993 [70/740] Linking static target lib/librte_pci.a 00:02:23.993 [71/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:23.993 [72/740] Generating lib/rte_hash_def with a custom command 00:02:23.993 [73/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:23.993 [74/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:23.993 [75/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:23.993 [76/740] Generating lib/rte_hash_mingw with a custom command 00:02:23.993 [77/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_log.c.o 00:02:23.993 [78/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:23.993 [79/740] Generating lib/rte_timer_mingw with a custom command 00:02:24.253 [80/740] Generating lib/rte_timer_def with a custom command 00:02:24.253 [81/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.253 [82/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.253 [83/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.253 [84/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:24.254 [85/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.254 [86/740] Generating lib/rte_acl_def with a custom command 00:02:24.254 [87/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.254 [88/740] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.254 [89/740] Generating lib/rte_acl_mingw with a custom command 00:02:24.254 [90/740] Generating lib/rte_bbdev_mingw with a custom command 00:02:24.254 [91/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.254 [92/740] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.254 [93/740] Generating 
lib/rte_bbdev_def with a custom command 00:02:24.254 [94/740] Generating lib/rte_bitratestats_mingw with a custom command 00:02:24.254 [95/740] Linking static target lib/librte_meter.a 00:02:24.254 [96/740] Generating lib/rte_bitratestats_def with a custom command 00:02:24.254 [97/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.254 [98/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.254 [99/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.254 [100/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.254 [101/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.254 [102/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.254 [103/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:24.254 [104/740] Generating lib/rte_bpf_def with a custom command 00:02:24.254 [105/740] Generating lib/rte_bpf_mingw with a custom command 00:02:24.254 [106/740] Generating lib/rte_cfgfile_mingw with a custom command 00:02:24.254 [107/740] Generating lib/rte_cfgfile_def with a custom command 00:02:24.254 [108/740] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.254 [109/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.254 [110/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.254 [111/740] Generating lib/rte_compressdev_def with a custom command 00:02:24.254 [112/740] Generating lib/rte_compressdev_mingw with a custom command 00:02:24.254 [113/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.254 [114/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.254 [115/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.254 [116/740] Compiling C object 
lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.254 [117/740] Generating lib/rte_cryptodev_def with a custom command 00:02:24.254 [118/740] Compiling C object lib/librte_metrics.a.p/metrics_rte_metrics.c.o 00:02:24.254 [119/740] Generating lib/rte_cryptodev_mingw with a custom command 00:02:24.254 [120/740] Generating lib/rte_distributor_mingw with a custom command 00:02:24.254 [121/740] Generating lib/rte_distributor_def with a custom command 00:02:24.254 [122/740] Generating lib/rte_efd_def with a custom command 00:02:24.254 [123/740] Generating lib/rte_efd_mingw with a custom command 00:02:24.254 [124/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:24.254 [125/740] Generating lib/rte_eventdev_def with a custom command 00:02:24.519 [126/740] Generating lib/rte_eventdev_mingw with a custom command 00:02:24.519 [127/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.519 [128/740] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.519 [129/740] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.519 [130/740] Generating lib/rte_gpudev_def with a custom command 00:02:24.519 [131/740] Generating lib/rte_gpudev_mingw with a custom command 00:02:24.519 [132/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.519 [133/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.519 [134/740] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.519 [135/740] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.519 [136/740] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.519 [137/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.519 [138/740] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.519 [139/740] Generating 
lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.519 [140/740] Linking target lib/librte_kvargs.so.23.0 00:02:24.519 [141/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.519 [142/740] Generating lib/rte_gro_def with a custom command 00:02:24.519 [143/740] Generating lib/rte_gro_mingw with a custom command 00:02:24.519 [144/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.519 [145/740] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.519 [146/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.519 [147/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.519 [148/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.519 [149/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.519 [150/740] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.519 [151/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.519 [152/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.519 [153/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.519 [154/740] Compiling C object lib/librte_cfgfile.a.p/cfgfile_rte_cfgfile.c.o 00:02:24.519 [155/740] Generating lib/rte_gso_def with a custom command 00:02:24.519 [156/740] Generating lib/rte_gso_mingw with a custom command 00:02:24.786 [157/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.786 [158/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.786 [159/740] Linking static target lib/librte_cfgfile.a 00:02:24.786 [160/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.786 [161/740] Generating lib/rte_ip_frag_def with a custom command 00:02:24.786 [162/740] Compiling C object 
lib/librte_acl.a.p/acl_tb_mem.c.o 00:02:24.786 [163/740] Generating lib/rte_ip_frag_mingw with a custom command 00:02:24.786 [164/740] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.786 [165/740] Generating lib/rte_jobstats_mingw with a custom command 00:02:24.786 [166/740] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.786 [167/740] Linking static target lib/librte_cmdline.a 00:02:24.786 [168/740] Generating lib/rte_jobstats_def with a custom command 00:02:24.786 [169/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.786 [170/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.786 [171/740] Generating lib/rte_latencystats_def with a custom command 00:02:24.786 [172/740] Generating lib/rte_latencystats_mingw with a custom command 00:02:24.786 [173/740] Generating lib/rte_lpm_def with a custom command 00:02:24.786 [174/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.786 [175/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.786 [176/740] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.786 [177/740] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.786 [178/740] Linking static target lib/librte_timer.a 00:02:24.786 [179/740] Generating lib/rte_lpm_mingw with a custom command 00:02:24.786 [180/740] Generating lib/rte_member_mingw with a custom command 00:02:24.786 [181/740] Generating lib/rte_member_def with a custom command 00:02:24.786 [182/740] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.786 [183/740] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.786 [184/740] Generating lib/rte_pcapng_mingw with a custom command 00:02:24.786 [185/740] Compiling C object lib/librte_acl.a.p/acl_rte_acl.c.o 00:02:24.786 [186/740] Generating lib/rte_pcapng_def with a custom command 00:02:24.786 [187/740] Compiling C 
object lib/librte_metrics.a.p/metrics_rte_metrics_telemetry.c.o 00:02:24.786 [188/740] Linking static target lib/librte_telemetry.a 00:02:24.786 [189/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.786 [190/740] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.786 [191/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_dump.c.o 00:02:24.786 [192/740] Linking static target lib/librte_net.a 00:02:24.786 [193/740] Linking static target lib/librte_metrics.a 00:02:24.786 [194/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load.c.o 00:02:24.786 [195/740] Compiling C object lib/librte_bitratestats.a.p/bitratestats_rte_bitrate.c.o 00:02:24.786 [196/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_stub.c.o 00:02:24.786 [197/740] Linking static target lib/librte_bitratestats.a 00:02:24.786 [198/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf.c.o 00:02:24.786 [199/740] Compiling C object lib/librte_jobstats.a.p/jobstats_rte_jobstats.c.o 00:02:24.786 [200/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.786 [201/740] Linking static target lib/librte_jobstats.a 00:02:24.786 [202/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_match_sse.c.o 00:02:24.786 [203/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.786 [204/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.786 [205/740] Generating lib/rte_power_def with a custom command 00:02:25.055 [206/740] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:25.055 [207/740] Generating lib/rte_power_mingw with a custom command 00:02:25.055 [208/740] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:25.055 [209/740] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:25.055 [210/740] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:25.055 [211/740] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:25.055 [212/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_load_elf.c.o 00:02:25.055 [213/740] Generating lib/rte_regexdev_mingw with a custom command 00:02:25.055 [214/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:25.055 [215/740] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:25.055 [216/740] Generating lib/rte_rawdev_mingw with a custom command 00:02:25.055 [217/740] Generating lib/rte_rawdev_def with a custom command 00:02:25.055 [218/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:25.055 [219/740] Generating lib/rte_regexdev_def with a custom command 00:02:25.055 [220/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:25.055 [221/740] Generating lib/rte_dmadev_def with a custom command 00:02:25.055 [222/740] Generating lib/rte_dmadev_mingw with a custom command 00:02:25.055 [223/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:25.055 [224/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_private.c.o 00:02:25.055 [225/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_ring.c.o 00:02:25.055 [226/740] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:25.055 [227/740] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:25.055 [228/740] Generating lib/rte_rib_def with a custom command 00:02:25.055 [229/740] Generating lib/rte_reorder_mingw with a custom command 00:02:25.055 [230/740] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:25.055 [231/740] Generating lib/rte_reorder_def with a custom command 00:02:25.055 [232/740] Generating lib/rte_rib_mingw with a custom command 00:02:25.055 [233/740] Compiling C object lib/librte_acl.a.p/acl_acl_gen.c.o 00:02:25.055 [234/740] Generating lib/rte_sched_def with a custom command 00:02:25.055 [235/740] Generating lib/rte_security_mingw with a 
custom command 00:02:25.055 [236/740] Generating lib/rte_security_def with a custom command 00:02:25.055 [237/740] Generating lib/rte_sched_mingw with a custom command 00:02:25.055 [238/740] Compiling C object lib/librte_eventdev.a.p/eventdev_eventdev_trace_points.c.o 00:02:25.055 [239/740] Generating lib/rte_stack_def with a custom command 00:02:25.055 [240/740] Compiling C object lib/librte_power.a.p/power_rte_power_empty_poll.c.o 00:02:25.055 [241/740] Generating lib/rte_stack_mingw with a custom command 00:02:25.055 [242/740] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:25.055 [243/740] Linking static target lib/librte_mempool.a 00:02:25.055 [244/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor_single.c.o 00:02:25.055 [245/740] Generating lib/bitratestats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.055 [246/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_exec.c.o 00:02:25.318 [247/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_scalar.c.o 00:02:25.318 [248/740] Compiling C object lib/librte_sched.a.p/sched_rte_pie.c.o 00:02:25.318 [249/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_convert.c.o 00:02:25.318 [250/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_std.c.o 00:02:25.318 [251/740] Generating lib/rte_vhost_def with a custom command 00:02:25.318 [252/740] Compiling C object lib/librte_sched.a.p/sched_rte_red.c.o 00:02:25.318 [253/740] Compiling C object lib/librte_sched.a.p/sched_rte_approx.c.o 00:02:25.318 [254/740] Compiling C object lib/librte_gso.a.p/gso_gso_udp4.c.o 00:02:25.318 [255/740] Generating lib/rte_vhost_mingw with a custom command 00:02:25.318 [256/740] Compiling C object lib/librte_stack.a.p/stack_rte_stack_lf.c.o 00:02:25.318 [257/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:25.318 [258/740] Compiling C object lib/librte_gso.a.p/gso_gso_tcp4.c.o 00:02:25.319 [259/740] Compiling C object 
lib/librte_stack.a.p/stack_rte_stack.c.o 00:02:25.319 [260/740] Generating lib/cfgfile.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.319 [261/740] Linking static target lib/librte_stack.a 00:02:25.319 [262/740] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:25.319 [263/740] Generating lib/rte_ipsec_def with a custom command 00:02:25.319 [264/740] Generating lib/rte_ipsec_mingw with a custom command 00:02:25.319 [265/740] Linking static target lib/librte_compressdev.a 00:02:25.319 [266/740] Compiling C object lib/librte_gso.a.p/gso_rte_gso.c.o 00:02:25.319 [267/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_tcp4.c.o 00:02:25.319 [268/740] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.319 [269/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:25.319 [270/740] Generating lib/rte_fib_def with a custom command 00:02:25.319 [271/740] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:25.319 [272/740] Linking static target lib/librte_rcu.a 00:02:25.319 [273/740] Generating lib/rte_fib_mingw with a custom command 00:02:25.319 [274/740] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:25.319 [275/740] Compiling C object lib/librte_gso.a.p/gso_gso_tunnel_udp4.c.o 00:02:25.319 [276/740] Generating lib/jobstats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.319 [277/740] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:25.319 [278/740] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.319 [279/740] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.319 [280/740] Compiling C object lib/librte_gro.a.p/gro_gro_udp4.c.o 00:02:25.319 [281/740] Compiling C object lib/librte_rawdev.a.p/rawdev_rte_rawdev.c.o 00:02:25.586 [282/740] Linking static target 
lib/librte_rawdev.a 00:02:25.586 [283/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib.c.o 00:02:25.586 [284/740] Compiling C object lib/librte_gpudev.a.p/gpudev_gpudev.c.o 00:02:25.586 [285/740] Linking target lib/librte_telemetry.so.23.0 00:02:25.586 [286/740] Linking static target lib/librte_gpudev.a 00:02:25.586 [287/740] Compiling C object lib/librte_bbdev.a.p/bbdev_rte_bbdev.c.o 00:02:25.586 [288/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_pkt.c.o 00:02:25.586 [289/740] Generating lib/metrics.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.586 [290/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_reassembly.c.o 00:02:25.586 [291/740] Linking static target lib/librte_bbdev.a 00:02:25.586 [292/740] Generating lib/rte_port_def with a custom command 00:02:25.586 [293/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_tcp4.c.o 00:02:25.586 [294/740] Compiling C object lib/librte_gro.a.p/gro_gro_vxlan_udp4.c.o 00:02:25.586 [295/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_reassembly.c.o 00:02:25.586 [296/740] Generating lib/rte_pdump_mingw with a custom command 00:02:25.586 [297/740] Generating lib/rte_port_mingw with a custom command 00:02:25.586 [298/740] Compiling C object lib/librte_gro.a.p/gro_gro_tcp4.c.o 00:02:25.586 [299/740] Generating lib/rte_pdump_def with a custom command 00:02:25.586 [300/740] Compiling C object lib/librte_latencystats.a.p/latencystats_rte_latencystats.c.o 00:02:25.586 [301/740] Linking static target lib/librte_latencystats.a 00:02:25.586 [302/740] Compiling C object lib/librte_member.a.p/member_rte_member.c.o 00:02:25.586 [303/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ip_frag_common.c.o 00:02:25.586 [304/740] Compiling C object lib/librte_gro.a.p/gro_rte_gro.c.o 00:02:25.586 [305/740] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:25.586 [306/740] Compiling C object 
lib/member/libsketch_avx512_tmp.a.p/rte_member_sketch_avx512.c.o 00:02:25.586 [307/740] Linking static target lib/librte_dmadev.a 00:02:25.586 [308/740] Linking static target lib/librte_gro.a 00:02:25.586 [309/740] Linking static target lib/member/libsketch_avx512_tmp.a 00:02:25.586 [310/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv6_fragmentation.c.o 00:02:25.586 [311/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_ip_frag_internal.c.o 00:02:25.586 [312/740] Generating lib/stack.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.586 [313/740] Compiling C object lib/librte_gso.a.p/gso_gso_common.c.o 00:02:25.586 [314/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_validate.c.o 00:02:25.586 [315/740] Linking static target lib/librte_gso.a 00:02:25.586 [316/740] Compiling C object lib/librte_power.a.p/power_rte_power_intel_uncore.c.o 00:02:25.855 [317/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm.c.o 00:02:25.855 [318/740] Compiling C object lib/librte_distributor.a.p/distributor_rte_distributor.c.o 00:02:25.855 [319/740] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:25.855 [320/740] Linking static target lib/librte_distributor.a 00:02:25.855 [321/740] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:25.855 [322/740] Generating lib/rte_table_def with a custom command 00:02:25.855 [323/740] Generating lib/rte_table_mingw with a custom command 00:02:25.855 [324/740] Compiling C object lib/librte_table.a.p/table_rte_swx_keycmp.c.o 00:02:25.855 [325/740] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.855 [326/740] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:25.855 [327/740] Linking static target lib/librte_mbuf.a 00:02:25.855 [328/740] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:25.855 [329/740] Compiling C object 
lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:25.855 [330/740] Compiling C object lib/librte_regexdev.a.p/regexdev_rte_regexdev.c.o 00:02:25.855 [331/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_telemetry.c.o 00:02:25.855 [332/740] Linking static target lib/librte_regexdev.a 00:02:25.855 [333/740] Linking static target lib/librte_power.a 00:02:26.127 [334/740] Compiling C object lib/librte_member.a.p/member_rte_member_vbf.c.o 00:02:26.127 [335/740] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:26.127 [336/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib.c.o 00:02:26.127 [337/740] Compiling C object lib/librte_fib.a.p/fib_rte_fib6.c.o 00:02:26.127 [338/740] Linking static target lib/librte_reorder.a 00:02:26.127 [339/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ses.c.o 00:02:26.127 [340/740] Compiling C object lib/librte_pcapng.a.p/pcapng_rte_pcapng.c.o 00:02:26.127 [341/740] Compiling C object lib/librte_ip_frag.a.p/ip_frag_rte_ipv4_fragmentation.c.o 00:02:26.127 [342/740] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:26.127 [343/740] Generating lib/latencystats.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.127 [344/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_learner.c.o 00:02:26.127 [345/740] Linking static target lib/librte_pcapng.a 00:02:26.127 [346/740] Generating lib/gro.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.127 [347/740] Generating lib/gso.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.127 [348/740] Generating lib/rte_pipeline_def with a custom command 00:02:26.127 [349/740] Generating lib/rte_pipeline_mingw with a custom command 00:02:26.127 [350/740] Linking static target lib/librte_ip_frag.a 00:02:26.127 [351/740] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:26.127 [352/740] Compiling C object 
lib/librte_table.a.p/table_rte_swx_table_em.c.o 00:02:26.127 [353/740] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:26.127 [354/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8_avx512.c.o 00:02:26.127 [355/740] Generating lib/rte_graph_def with a custom command 00:02:26.127 [356/740] Linking static target lib/librte_eal.a 00:02:26.127 [357/740] Generating lib/rte_graph_mingw with a custom command 00:02:26.127 [358/740] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:26.127 [359/740] Linking static target lib/librte_security.a 00:02:26.127 [360/740] Generating lib/distributor.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.127 [361/740] Compiling C object lib/librte_port.a.p/port_rte_port_sched.c.o 00:02:26.127 [362/740] Compiling C object lib/librte_node.a.p/node_null.c.o 00:02:26.127 [363/740] Compiling C object lib/librte_fib.a.p/fib_trie_avx512.c.o 00:02:26.127 [364/740] Generating lib/rawdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.397 [365/740] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.397 [366/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_crypto_adapter.c.o 00:02:26.397 [367/740] Generating lib/rte_node_def with a custom command 00:02:26.397 [368/740] Generating lib/rte_node_mingw with a custom command 00:02:26.397 [369/740] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:26.397 [370/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_timer_adapter.c.o 00:02:26.397 [371/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_wm.c.o 00:02:26.397 [372/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:26.397 [373/740] Compiling C object lib/librte_lpm.a.p/lpm_rte_lpm6.c.o 00:02:26.397 [374/740] Compiling C object lib/librte_ipsec.a.p/ipsec_sa.c.o 00:02:26.397 [375/740] Compiling C object 
lib/librte_table.a.p/table_rte_table_stub.c.o 00:02:26.397 [376/740] Compiling C object lib/librte_port.a.p/port_rte_port_frag.c.o 00:02:26.397 [377/740] Compiling C object lib/librte_table.a.p/table_rte_swx_table_selector.c.o 00:02:26.397 [378/740] Linking static target lib/librte_lpm.a 00:02:26.397 [379/740] Compiling C object lib/librte_table.a.p/table_rte_table_array.c.o 00:02:26.397 [380/740] Generating drivers/rte_bus_pci_def with a custom command 00:02:26.397 [381/740] Compiling C object lib/librte_bpf.a.p/bpf_bpf_jit_x86.c.o 00:02:26.397 [382/740] Generating drivers/rte_bus_pci_mingw with a custom command 00:02:26.397 [383/740] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.397 [384/740] Linking static target lib/librte_bpf.a 00:02:26.397 [385/740] Compiling C object lib/librte_rib.a.p/rib_rte_rib6.c.o 00:02:26.397 [386/740] Generating drivers/rte_bus_vdev_mingw with a custom command 00:02:26.397 [387/740] Generating lib/bbdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.397 [388/740] Compiling C object lib/librte_port.a.p/port_rte_port_ras.c.o 00:02:26.397 [389/740] Generating drivers/rte_bus_vdev_def with a custom command 00:02:26.397 [390/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_cuckoo.c.o 00:02:26.397 [391/740] Linking static target lib/librte_rib.a 00:02:26.397 [392/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:26.397 [393/740] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.397 [394/740] Generating drivers/rte_mempool_ring_def with a custom command 00:02:26.397 [395/740] Generating drivers/rte_mempool_ring_mingw with a custom command 00:02:26.397 [396/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm_ipv6.c.o 00:02:26.659 [397/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_tx_adapter.c.o 00:02:26.659 [398/740] 
Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:02:26.659 [399/740] Generating lib/pcapng.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.659 [400/740] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.659 [401/740] Generating lib/ip_frag.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.659 [402/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:02:26.659 [403/740] Compiling C object lib/librte_table.a.p/table_rte_table_acl.c.o
00:02:26.659 [404/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_eventdev.c.o
00:02:26.659 [405/740] Compiling C object lib/librte_ipsec.a.p/ipsec_ipsec_sad.c.o
00:02:26.659 [406/740] Compiling C object lib/librte_table.a.p/table_rte_table_lpm.c.o
00:02:26.659 [407/740] Compiling C object lib/librte_port.a.p/port_rte_port_ethdev.c.o
00:02:26.659 [408/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_port_in_action.c.o
00:02:26.659 [409/740] Generating drivers/rte_net_i40e_def with a custom command
00:02:26.659 [410/740] Generating drivers/rte_net_i40e_mingw with a custom command
00:02:26.659 [411/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:02:26.659 [412/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_fd.c.o
00:02:26.659 [413/740] Compiling C object lib/librte_member.a.p/member_rte_member_ht.c.o
00:02:26.659 [414/740] Compiling C object lib/librte_port.a.p/port_rte_port_fd.c.o
00:02:26.659 [415/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ethdev.c.o
00:02:26.659 [416/740] Compiling C object lib/librte_graph.a.p/graph_graph_debug.c.o
00:02:26.659 [417/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:02:26.922 [418/740] Compiling C object lib/librte_port.a.p/port_rte_port_eventdev.c.o
00:02:26.922 [419/740] Compiling C object lib/librte_node.a.p/node_log.c.o
00:02:26.922 [420/740] Compiling C object lib/librte_port.a.p/port_rte_port_source_sink.c.o
00:02:26.922 [421/740] Compiling C object lib/librte_graph.a.p/graph_graph_ops.c.o
00:02:26.922 [422/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_commands.c.o
00:02:26.922 [423/740] Compiling C object app/dpdk-test-cmdline.p/test-cmdline_cmdline_test.c.o
00:02:26.922 [424/740] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.922 [425/740] Compiling C object lib/librte_acl.a.p/acl_acl_bld.c.o
00:02:26.922 [426/740] Compiling C object lib/librte_graph.a.p/graph_node.c.o
00:02:26.922 [427/740] Compiling C object lib/librte_graph.a.p/graph_graph_populate.c.o
00:02:26.922 [428/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_source_sink.c.o
00:02:26.922 [429/740] Compiling C object lib/librte_node.a.p/node_pkt_drop.c.o
00:02:26.922 [430/740] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:02:26.922 [431/740] Compiling C object lib/librte_port.a.p/port_rte_port_sym_crypto.c.o
00:02:26.922 [432/740] Compiling C object lib/librte_graph.a.p/graph_graph.c.o
00:02:26.922 [433/740] Linking static target drivers/libtmp_rte_bus_vdev.a
00:02:26.922 [434/740] Compiling C object lib/librte_fib.a.p/fib_trie.c.o
00:02:26.922 [435/740] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.922 [436/740] Compiling C object lib/librte_node.a.p/node_ethdev_ctrl.c.o
00:02:26.922 [437/740] Generating lib/gpudev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.922 [438/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_main.c.o
00:02:26.922 [439/740] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:26.922 [440/740] Compiling C object lib/librte_efd.a.p/efd_rte_efd.c.o
00:02:26.922 [441/740] Linking static target lib/librte_efd.a
00:02:26.922 [442/740] Compiling C object lib/librte_fib.a.p/fib_dir24_8.c.o
00:02:27.186 [443/740] Generating lib/lpm.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.186 [444/740] Linking static target lib/librte_fib.a
00:02:27.186 [445/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_test.c.o
00:02:27.186 [446/740] Compiling C object lib/librte_graph.a.p/graph_graph_stats.c.o
00:02:27.186 [447/740] Linking static target lib/librte_graph.a
00:02:27.186 [448/740] Generating lib/bpf.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.186 [449/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:02:27.186 [450/740] Compiling C object lib/librte_node.a.p/node_ethdev_rx.c.o
00:02:27.186 [451/740] Compiling C object lib/librte_node.a.p/node_ethdev_tx.c.o
00:02:27.186 [452/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key8.c.o
00:02:27.186 [453/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_diag.c.o
00:02:27.186 [454/740] Compiling C object lib/librte_port.a.p/port_rte_swx_port_ring.c.o
00:02:27.186 [455/740] Generating lib/regexdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.186 [456/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_hmc.c.o
00:02:27.186 [457/740] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:02:27.186 [458/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_parser.c.o
00:02:27.186 [459/740] Linking static target drivers/libtmp_rte_bus_pci.a
00:02:27.186 [460/740] Generating lib/rib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.186 [461/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:02:27.453 [462/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:02:27.453 [463/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_pipeline.c.o
00:02:27.453 [464/740] Compiling C object lib/librte_pdump.a.p/pdump_rte_pdump.c.o
00:02:27.453 [465/740] Linking static target lib/librte_pdump.a
00:02:27.453 [466/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_lru.c.o
00:02:27.453 [467/740] Generating lib/efd.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.453 [468/740] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.453 [469/740] Compiling C object lib/librte_node.a.p/node_pkt_cls.c.o
00:02:27.453 [470/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_vf_representor.c.o
00:02:27.720 [471/740] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:02:27.720 [472/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key32.c.o
00:02:27.720 [473/740] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:27.720 [474/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_key16.c.o
00:02:27.720 [475/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_dcb.c.o
00:02:27.720 [476/740] Linking static target drivers/librte_bus_vdev.a
00:02:27.720 [477/740] Compiling C object drivers/librte_bus_vdev.so.23.0.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:02:27.720 [478/740] Compiling C object lib/librte_table.a.p/table_rte_table_hash_ext.c.o
00:02:27.720 [479/740] Linking static target lib/librte_table.a
00:02:27.720 [480/740] Generating lib/fib.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.720 [481/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_options_parse.c.o
00:02:27.720 [482/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vectors.c.o
00:02:27.720 [483/740] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:02:27.720 [484/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_main.c.o
00:02:27.720 [485/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_common.c.o
00:02:27.720 [486/740] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:27.720 [487/740] Compiling C object drivers/librte_bus_pci.so.23.0.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:02:27.720 [488/740] Linking static target drivers/librte_bus_pci.a
00:02:27.997 [489/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_main.c.o
00:02:27.997 [490/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_tm.c.o
00:02:27.997 [491/740] Generating lib/pdump.sym_chk with a custom command (wrapped by meson to capture output)
00:02:27.997 [492/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_lan_hmc.c.o
00:02:27.997 [493/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_evt_options.c.o
00:02:27.997 [494/740] Generating symbol file lib/librte_telemetry.so.23.0.p/librte_telemetry.so.23.0.symbols
00:02:27.997 [495/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_hash.c.o
00:02:27.997 [496/740] Generating symbol file lib/librte_kvargs.so.23.0.p/librte_kvargs.so.23.0.symbols
00:02:27.997 [497/740] Compiling C object lib/librte_sched.a.p/sched_rte_sched.c.o
00:02:27.997 [498/740] Compiling C object app/dpdk-dumpcap.p/dumpcap_main.c.o
00:02:27.997 [499/740] Linking static target lib/librte_sched.a
00:02:27.997 [500/740] Compiling C object app/dpdk-test-acl.p/test-acl_main.c.o
00:02:27.997 [501/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_vector_parsing.c.o
00:02:27.997 [502/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_outb.c.o
00:02:27.997 [503/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_main.c.o
00:02:27.997 [504/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_items_gen.c.o
00:02:28.258 [505/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_adminq.c.o
00:02:28.258 [506/740] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:02:28.258 [507/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_main.c.o
00:02:28.258 [508/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_init.c.o
00:02:28.258 [509/740] Linking static target lib/librte_cryptodev.a
00:02:28.258 [510/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_flow_gen.c.o
00:02:28.258 [511/740] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:02:28.258 [512/740] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.258 [513/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_common.c.o
00:02:28.258 [514/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_atq.c.o
00:02:28.258 [515/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_config.c.o
00:02:28.258 [516/740] Linking static target lib/librte_ethdev.a
00:02:28.258 [517/740] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:02:28.258 [518/740] Linking static target drivers/libtmp_rte_mempool_ring.a
00:02:28.258 [519/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_acl.c.o
00:02:28.258 [520/740] Compiling C object lib/librte_node.a.p/node_ip4_rewrite.c.o
00:02:28.258 [521/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm.c.o
00:02:28.258 [522/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_stub.c.o
00:02:28.258 [523/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_options_parsing.c.o
00:02:28.258 [524/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_ops.c.o
00:02:28.258 [525/740] Compiling C object app/dpdk-test-gpudev.p/test-gpudev_main.c.o
00:02:28.258 [526/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_lpm_ipv6.c.o
00:02:28.258 [527/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_pmd_cyclecount.c.o
00:02:28.258 [528/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_order_queue.c.o
00:02:28.518 [529/740] Compiling C object lib/librte_node.a.p/node_ip4_lookup.c.o
00:02:28.518 [530/740] Linking static target lib/librte_node.a
00:02:28.518 [531/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_ctl.c.o
00:02:28.518 [532/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_actions_gen.c.o
00:02:28.518 [533/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_throughput.c.o
00:02:28.519 [534/740] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.519 [535/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev.c.o
00:02:28.519 [536/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_throughput.c.o
00:02:28.519 [537/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_verify.c.o
00:02:28.519 [538/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_verify.c.o
00:02:28.519 [539/740] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:02:28.519 [540/740] Generating lib/graph.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.519 [541/740] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:28.519 [542/740] Compiling C object drivers/librte_mempool_ring.so.23.0.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:02:28.519 [543/740] Linking static target drivers/librte_mempool_ring.a
00:02:28.519 [544/740] Compiling C object lib/librte_ipsec.a.p/ipsec_esp_inb.c.o
00:02:28.519 [545/740] Linking static target lib/librte_ipsec.a
00:02:28.519 [546/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmd_flex_item.c.o
00:02:28.519 [547/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_vector.c.o
00:02:28.519 [548/740] Compiling C object app/dpdk-pdump.p/pdump_main.c.o
00:02:28.779 [549/740] Compiling C object lib/librte_port.a.p/port_rte_port_ring.c.o
00:02:28.779 [550/740] Generating lib/sched.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.779 [551/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_nvm.c.o
00:02:28.779 [552/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_pipeline_hash.c.o
00:02:28.779 [553/740] Linking static target lib/librte_port.a
00:02:28.779 [554/740] Compiling C object app/dpdk-proc-info.p/proc-info_main.c.o
00:02:28.779 [555/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_common.c.o
00:02:28.779 [556/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_atq.c.o
00:02:28.779 [557/740] Compiling C object app/dpdk-test-compress-perf.p/test-compress-perf_comp_perf_test_cyclecount.c.o
00:02:28.779 [558/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_pf.c.o
00:02:28.779 [559/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_common.c.o
00:02:28.779 [560/740] Compiling C object app/dpdk-test-crypto-perf.p/test-crypto-perf_cperf_test_latency.c.o
00:02:28.779 [561/740] Generating lib/table.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.779 [562/740] Generating lib/node.sym_chk with a custom command (wrapped by meson to capture output)
00:02:28.779 [563/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_atq.c.o
00:02:28.779 [564/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_sse.c.o
00:02:28.779 [565/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_queue.c.o
00:02:28.779 [566/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_fdir.c.o
00:02:28.779 [567/740] Compiling C object app/dpdk-testpmd.p/test-pmd_iofwd.c.o
00:02:29.038 [568/740] Compiling C object app/dpdk-testpmd.p/test-pmd_ieee1588fwd.c.o
00:02:29.038 [569/740] Compiling C object lib/librte_member.a.p/member_rte_member_sketch.c.o
00:02:29.038 [570/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_flow.c.o
00:02:29.038 [571/740] Linking static target lib/librte_member.a
00:02:29.038 [572/740] Compiling C object app/dpdk-testpmd.p/test-pmd_icmpecho.c.o
00:02:29.038 [573/740] Compiling C object app/dpdk-test-security-perf.p/test-security-perf_test_security_perf.c.o
00:02:29.038 [574/740] Compiling C object app/dpdk-test-pipeline.p/test-pipeline_runtime.c.o
00:02:29.038 [575/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_rte_pmd_i40e.c.o
00:02:29.038 [576/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_pipeline_queue.c.o
00:02:29.038 [577/740] Compiling C object app/dpdk-testpmd.p/test-pmd_5tswap.c.o
00:02:29.038 [578/740] Compiling C object app/dpdk-testpmd.p/test-pmd_bpf_cmd.c.o
00:02:29.038 [579/740] Compiling C object app/dpdk-test-fib.p/test-fib_main.c.o
00:02:29.038 [580/740] Compiling C object app/dpdk-testpmd.p/test-pmd_rxonly.c.o
00:02:29.038 [581/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_tm.c.o
00:02:29.038 [582/740] Generating lib/ipsec.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.038 [583/740] Compiling C object app/dpdk-testpmd.p/test-pmd_shared_rxq_fwd.c.o
00:02:29.038 [584/740] Compiling C object lib/librte_eventdev.a.p/eventdev_rte_event_eth_rx_adapter.c.o
00:02:29.038 [585/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macfwd.c.o
00:02:29.038 [586/740] Linking static target lib/librte_eventdev.a
00:02:29.038 [587/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_mtr.c.o
00:02:29.038 [588/740] Compiling C object app/dpdk-testpmd.p/test-pmd_macswap.c.o
00:02:29.038 [589/740] Compiling C object app/dpdk-testpmd.p/test-pmd_flowgen.c.o
00:02:29.297 [590/740] Compiling C object app/dpdk-testpmd.p/test-pmd_parameters.c.o
00:02:29.297 [591/740] Compiling C object app/dpdk-test-sad.p/test-sad_main.c.o
00:02:29.297 [592/740] Compiling C object app/dpdk-testpmd.p/test-pmd_util.c.o
00:02:29.297 [593/740] Compiling C object app/dpdk-testpmd.p/.._drivers_net_i40e_i40e_testpmd.c.o
00:02:29.297 [594/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_avx2.c.o
00:02:29.297 [595/740] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:02:29.297 [596/740] Compiling C object app/dpdk-test-regex.p/test-regex_main.c.o
00:02:29.297 [597/740] Linking static target lib/librte_hash.a
00:02:29.297 [598/740] Compiling C object app/dpdk-test-flow-perf.p/test-flow-perf_main.c.o
00:02:29.297 [599/740] Generating lib/member.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.297 [600/740] Generating lib/port.sym_chk with a custom command (wrapped by meson to capture output)
00:02:29.555 [601/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx_vec_sse.c.o
00:02:29.555 [602/740] Compiling C object drivers/net/i40e/libi40e_avx512_lib.a.p/i40e_rxtx_vec_avx512.c.o
00:02:29.555 [603/740] Linking static target drivers/net/i40e/libi40e_avx512_lib.a
00:02:29.555 [604/740] Compiling C object app/dpdk-testpmd.p/test-pmd_txonly.c.o
00:02:29.813 [605/740] Compiling C object app/dpdk-testpmd.p/test-pmd_csumonly.c.o
00:02:29.813 [606/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx2.c.o
00:02:29.813 [607/740] Compiling C object app/dpdk-test-security-perf.p/test_test_cryptodev_security_ipsec.c.o
00:02:29.813 [608/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline_spec.c.o
00:02:29.813 [609/740] Compiling C object drivers/net/i40e/base/libi40e_base.a.p/i40e_common.c.o
00:02:29.813 [610/740] Compiling C object app/dpdk-test-eventdev.p/test-eventdev_test_perf_common.c.o
00:02:29.813 [611/740] Linking static target drivers/net/i40e/base/libi40e_base.a
00:02:29.813 [612/740] Compiling C object lib/librte_acl.a.p/acl_acl_run_avx512.c.o
00:02:29.813 [613/740] Linking static target lib/librte_acl.a
00:02:30.380 [614/740] Compiling C object app/dpdk-testpmd.p/test-pmd_testpmd.c.o
00:02:30.380 [615/740] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.380 [616/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline.c.o
00:02:30.380 [617/740] Generating lib/acl.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.380 [618/740] Compiling C object app/dpdk-testpmd.p/test-pmd_noisy_vnf.c.o
00:02:30.639 [619/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_rxtx.c.o
00:02:31.206 [620/740] Compiling C object app/dpdk-testpmd.p/test-pmd_cmdline_flow.c.o
00:02:31.206 [621/740] Compiling C object app/dpdk-testpmd.p/test-pmd_config.c.o
00:02:31.464 [622/740] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:31.723 [623/740] Compiling C object drivers/libtmp_rte_net_i40e.a.p/net_i40e_i40e_ethdev.c.o
00:02:31.723 [624/740] Linking static target drivers/libtmp_rte_net_i40e.a
00:02:31.981 [625/740] Generating lib/eventdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:32.240 [626/740] Generating drivers/rte_net_i40e.pmd.c with a custom command
00:02:32.240 [627/740] Compiling C object drivers/librte_net_i40e.a.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:32.240 [628/740] Compiling C object drivers/librte_net_i40e.so.23.0.p/meson-generated_.._rte_net_i40e.pmd.c.o
00:02:32.240 [629/740] Linking static target drivers/librte_net_i40e.a
00:02:32.809 [630/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_swx_pipeline.c.o
00:02:32.810 [631/740] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o
00:02:33.068 [632/740] Compiling C object app/dpdk-test-bbdev.p/test-bbdev_test_bbdev_perf.c.o
00:02:33.068 [633/740] Generating drivers/rte_net_i40e.sym_chk with a custom command (wrapped by meson to capture output)
00:02:35.602 [634/740] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.139 [635/740] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output)
00:02:38.139 [636/740] Linking target lib/librte_eal.so.23.0
00:02:38.139 [637/740] Generating symbol file lib/librte_eal.so.23.0.p/librte_eal.so.23.0.symbols
00:02:38.139 [638/740] Linking target lib/librte_pci.so.23.0
00:02:38.139 [639/740] Linking target lib/librte_timer.so.23.0
00:02:38.139 [640/740] Linking target lib/librte_ring.so.23.0
00:02:38.139 [641/740] Linking target lib/librte_meter.so.23.0
00:02:38.139 [642/740] Linking target drivers/librte_bus_vdev.so.23.0
00:02:38.139 [643/740] Linking target lib/librte_jobstats.so.23.0
00:02:38.139 [644/740] Linking target lib/librte_cfgfile.so.23.0
00:02:38.139 [645/740] Linking target lib/librte_rawdev.so.23.0
00:02:38.139 [646/740] Linking target lib/librte_stack.so.23.0
00:02:38.139 [647/740] Linking target lib/librte_dmadev.so.23.0
00:02:38.139 [648/740] Linking target lib/librte_graph.so.23.0
00:02:38.139 [649/740] Linking target lib/librte_acl.so.23.0
00:02:38.139 [650/740] Generating symbol file drivers/librte_bus_vdev.so.23.0.p/librte_bus_vdev.so.23.0.symbols
00:02:38.139 [651/740] Generating symbol file lib/librte_timer.so.23.0.p/librte_timer.so.23.0.symbols
00:02:38.139 [652/740] Generating symbol file lib/librte_meter.so.23.0.p/librte_meter.so.23.0.symbols
00:02:38.139 [653/740] Generating symbol file lib/librte_pci.so.23.0.p/librte_pci.so.23.0.symbols
00:02:38.139 [654/740] Generating symbol file lib/librte_acl.so.23.0.p/librte_acl.so.23.0.symbols
00:02:38.139 [655/740] Generating symbol file lib/librte_graph.so.23.0.p/librte_graph.so.23.0.symbols
00:02:38.139 [656/740] Generating symbol file lib/librte_dmadev.so.23.0.p/librte_dmadev.so.23.0.symbols
00:02:38.139 [657/740] Generating symbol file lib/librte_ring.so.23.0.p/librte_ring.so.23.0.symbols
00:02:38.139 [658/740] Linking target drivers/librte_bus_pci.so.23.0
00:02:38.139 [659/740] Linking target lib/librte_mempool.so.23.0
00:02:38.139 [660/740] Linking target lib/librte_rcu.so.23.0
00:02:38.398 [661/740] Generating symbol file drivers/librte_bus_pci.so.23.0.p/librte_bus_pci.so.23.0.symbols
00:02:38.398 [662/740] Generating symbol file lib/librte_mempool.so.23.0.p/librte_mempool.so.23.0.symbols
00:02:38.398 [663/740] Generating symbol file lib/librte_rcu.so.23.0.p/librte_rcu.so.23.0.symbols
00:02:38.398 [664/740] Linking target drivers/librte_mempool_ring.so.23.0
00:02:38.398 [665/740] Linking target lib/librte_rib.so.23.0
00:02:38.398 [666/740] Linking target lib/librte_mbuf.so.23.0
00:02:38.398 [667/740] Generating symbol file lib/librte_mbuf.so.23.0.p/librte_mbuf.so.23.0.symbols
00:02:38.398 [668/740] Generating symbol file lib/librte_rib.so.23.0.p/librte_rib.so.23.0.symbols
00:02:38.398 [669/740] Linking target lib/librte_gpudev.so.23.0
00:02:38.398 [670/740] Linking target lib/librte_bbdev.so.23.0
00:02:38.398 [671/740] Linking target lib/librte_compressdev.so.23.0
00:02:38.398 [672/740] Linking target lib/librte_distributor.so.23.0
00:02:38.398 [673/740] Linking target lib/librte_reorder.so.23.0
00:02:38.398 [674/740] Linking target lib/librte_regexdev.so.23.0
00:02:38.398 [675/740] Linking target lib/librte_net.so.23.0
00:02:38.398 [676/740] Linking target lib/librte_sched.so.23.0
00:02:38.398 [677/740] Linking target lib/librte_cryptodev.so.23.0
00:02:38.658 [678/740] Linking target lib/librte_fib.so.23.0
00:02:38.658 [679/740] Generating symbol file lib/librte_net.so.23.0.p/librte_net.so.23.0.symbols
00:02:38.658 [680/740] Generating symbol file lib/librte_sched.so.23.0.p/librte_sched.so.23.0.symbols
00:02:38.658 [681/740] Generating symbol file lib/librte_cryptodev.so.23.0.p/librte_cryptodev.so.23.0.symbols
00:02:38.658 [682/740] Linking target lib/librte_cmdline.so.23.0
00:02:38.658 [683/740] Linking target lib/librte_hash.so.23.0
00:02:38.658 [684/740] Linking target lib/librte_ethdev.so.23.0
00:02:38.658 [685/740] Linking target lib/librte_security.so.23.0
00:02:38.917 [686/740] Generating symbol file lib/librte_hash.so.23.0.p/librte_hash.so.23.0.symbols
00:02:38.917 [687/740] Generating symbol file lib/librte_security.so.23.0.p/librte_security.so.23.0.symbols
00:02:38.917 [688/740] Generating symbol file lib/librte_ethdev.so.23.0.p/librte_ethdev.so.23.0.symbols
00:02:38.917 [689/740] Linking target lib/librte_efd.so.23.0
00:02:38.917 [690/740] Linking target lib/librte_lpm.so.23.0
00:02:38.917 [691/740] Linking target lib/librte_member.so.23.0
00:02:38.917 [692/740] Linking target lib/librte_metrics.so.23.0
00:02:38.917 [693/740] Linking target lib/librte_ip_frag.so.23.0
00:02:38.917 [694/740] Linking target lib/librte_ipsec.so.23.0
00:02:38.917 [695/740] Linking target lib/librte_pcapng.so.23.0
00:02:38.917 [696/740] Linking target lib/librte_gso.so.23.0
00:02:38.917 [697/740] Linking target lib/librte_gro.so.23.0
00:02:38.917 [698/740] Linking target lib/librte_power.so.23.0
00:02:38.917 [699/740] Linking target lib/librte_eventdev.so.23.0
00:02:38.917 [700/740] Linking target lib/librte_bpf.so.23.0
00:02:38.917 [701/740] Linking target drivers/librte_net_i40e.so.23.0
00:02:38.917 [702/740] Generating symbol file lib/librte_lpm.so.23.0.p/librte_lpm.so.23.0.symbols
00:02:38.917 [703/740] Generating symbol file lib/librte_ip_frag.so.23.0.p/librte_ip_frag.so.23.0.symbols
00:02:38.917 [704/740] Generating symbol file lib/librte_metrics.so.23.0.p/librte_metrics.so.23.0.symbols
00:02:38.917 [705/740] Generating symbol file lib/librte_pcapng.so.23.0.p/librte_pcapng.so.23.0.symbols
00:02:39.177 [706/740] Generating symbol file lib/librte_eventdev.so.23.0.p/librte_eventdev.so.23.0.symbols
00:02:39.177 [707/740] Generating symbol file lib/librte_bpf.so.23.0.p/librte_bpf.so.23.0.symbols
00:02:39.177 [708/740] Linking target lib/librte_node.so.23.0
00:02:39.177 [709/740] Linking target lib/librte_latencystats.so.23.0
00:02:39.177 [710/740] Linking target lib/librte_bitratestats.so.23.0
00:02:39.177 [711/740] Linking target lib/librte_pdump.so.23.0
00:02:39.177 [712/740] Linking target lib/librte_port.so.23.0
00:02:39.177 [713/740] Generating symbol file lib/librte_port.so.23.0.p/librte_port.so.23.0.symbols
00:02:39.177 [714/740] Linking target lib/librte_table.so.23.0
00:02:39.436 [715/740] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o
00:02:39.436 [716/740] Linking static target lib/librte_vhost.a
00:02:39.436 [717/740] Generating symbol file lib/librte_table.so.23.0.p/librte_table.so.23.0.symbols
00:02:39.436 [718/740] Compiling C object lib/librte_pipeline.a.p/pipeline_rte_table_action.c.o
00:02:39.436 [719/740] Linking static target lib/librte_pipeline.a
00:02:40.005 [720/740] Linking target app/dpdk-test-regex
00:02:40.005 [721/740] Linking target app/dpdk-dumpcap
00:02:40.005 [722/740] Linking target app/dpdk-test-sad
00:02:40.005 [723/740] Linking target app/dpdk-test-gpudev
00:02:40.005 [724/740] Linking target app/dpdk-test-fib
00:02:40.005 [725/740] Linking target app/dpdk-test-cmdline
00:02:40.005 [726/740] Linking target app/dpdk-test-flow-perf
00:02:40.005 [727/740] Linking target app/dpdk-test-compress-perf
00:02:40.005 [728/740] Linking target app/dpdk-test-bbdev
00:02:40.005 [729/740] Linking target app/dpdk-test-acl
00:02:40.005 [730/740] Linking target app/dpdk-proc-info
00:02:40.005 [731/740] Linking target app/dpdk-pdump
00:02:40.005 [732/740] Linking target app/dpdk-test-pipeline
00:02:40.005 [733/740] Linking target app/dpdk-test-eventdev
00:02:40.005 [734/740] Linking target app/dpdk-test-crypto-perf
00:02:40.005 [735/740] Linking target app/dpdk-testpmd
00:02:40.005 [736/740] Linking target app/dpdk-test-security-perf
00:02:41.389 [737/740] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output)
00:02:41.389 [738/740] Linking target lib/librte_vhost.so.23.0
00:02:43.942 [739/740] Generating lib/pipeline.sym_chk with a custom command (wrapped by meson to capture output)
00:02:44.202 [740/740] Linking target lib/librte_pipeline.so.23.0
00:02:44.202 05:02:57 build_native_dpdk -- common/autobuild_common.sh@201 -- $ uname -s
00:02:44.202 05:02:57 build_native_dpdk -- common/autobuild_common.sh@201 -- $ [[ Linux == \F\r\e\e\B\S\D ]]
00:02:44.202 05:02:57 build_native_dpdk -- common/autobuild_common.sh@214 -- $ ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp -j96 install
00:02:44.202 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp'
00:02:44.202 [0/1] Installing files.
00:02:44.467 Installing subdir /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/packet_ordering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/packet_ordering
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/basicfwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/skeleton/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/skeleton
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/helloworld/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/helloworld
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/obj.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/hash_func.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/varbit.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/ethdev.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/learner.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/mirroring.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/recirculation.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/meter.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_macswp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/vxlan_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_routing_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/registers.spec to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/l2fwd_pcap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/fib_nexthop_group_table.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/pcap.io to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/packet.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples
00:02:44.467 Installing
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/pipeline/examples/selector.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/pipeline/examples 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ecdsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_xts.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_cmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_tdes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_hmac.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_ccm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.467 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_aes.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_rsa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_sha.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_validation_gcm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/fips_dev_self_test.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/fips_validation/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/fips_validation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-jobstats/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-jobstats 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/flow_classify.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/ipv4_rules_file.txt to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_classify/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_classify 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/service_cores/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/service_cores 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/virtio_net.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/pkt_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/sse/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/sse 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/altivec/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/altivec 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/common/neon/port_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/common/neon 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/flow_blocks.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/flow_filtering/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/flow_filtering 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_fragmentation/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_fragmentation 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_crypto 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_nop.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_monitor.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/channel_manager.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/power_manager.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/oob_monitor_x86.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/vm_power_cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:44.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/vm_power_cli_guest.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vm_power_manager/guest_cli/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vm_power_manager/guest_cli 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/perf_core.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:44.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-power/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-power 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/rxtx_callbacks/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/rxtx_callbacks 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bond/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bond 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/link_status_interrupt/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/link_status_interrupt 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bbdev_app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bbdev_app 00:02:44.468 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.468 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_poll.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-event/l2fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-event 00:02:44.469 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t2.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/README to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t1.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/dummy.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/bpf/t3.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/bpf 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/vhost_blk_compat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/blk_spec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vhost_blk/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vhost_blk 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/node.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/node/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/node 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/server 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/server_node_efd/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/server_node_efd/shared 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/ptpclient.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ptpclient/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ptpclient 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_reassembly/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_reassembly 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/parse_obj_list.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/cmdline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/cmdline 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/vdpa_blk_compact.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vdpa/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vdpa 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/dmafwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/dma/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/dma 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/rte_policer.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:44.469 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_meter/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_meter 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/cat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-cat/l2fwd-cat.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-cat 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/timer/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/timer 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/stats.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_red.cfg to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_pie.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/profile_ov.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cmdline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/cfg_file.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/qos_sched/app_thread.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/qos_sched 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipv4_multicast/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipv4_multicast 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.469 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_route.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_acl_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 
00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_sequential.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/lpm_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v4.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_fib.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_em_hlm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_route_parse.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/em_default_v6.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_internal_port.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_common.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_event_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/l3fwd_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sa.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_worker.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_process.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp4.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/parser.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep1.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/rt.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/event_helper.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ep0.cfg to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/ipsec-secgw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/sp6.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/run_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/bypass_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aescbc_sha1_common_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs_secgw.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/linux_test.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/data_rxtx.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.470 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_null_header_reconstruct.py to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_ipv6opts.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/pkttest.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/load_env.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_3descbc_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/tun_aescbc_sha1_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesctr_sha1_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_defs.sh to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ipsec-secgw/test/trs_aesgcm_common_defs.sh to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ipsec-secgw/test 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/shm.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-keepalive/ka-agent/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-keepalive/ka-agent 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/conn.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cryptodev.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/parser.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/swq.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tmgr.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/tap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/link.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/pipeline.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/kni.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/cli.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/thread.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/kni.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/firewall.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route_ecmp.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/route.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/l2fwd.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/flow_crypto.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/tap.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ip_pipeline/examples/rss.cli to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ip_pipeline/examples 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/vmdq_dcb/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/vmdq_dcb 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/ntb_fwd.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ntb/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ntb 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:44.471 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/ethapp.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/ethtool-app/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/ethtool-app 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/rte_ethtool.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/ethtool/lib/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/ethtool/lib 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process 00:02:44.472 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/mp_commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/simple_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/simple_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/commands.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/hotplug_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/hotplug_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp 00:02:44.472 
Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/client.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_client/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_client 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/init.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/args.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/mp_server/Makefile to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/mp_server 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/client_server_mp/shared/common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/client_server_mp/shared 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/multi_process/symmetric_mp/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/multi_process/symmetric_mp 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_generic.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/pipeline_worker_tx.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/eventdev_pipeline/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/eventdev_pipeline 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/main.c to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/distributor/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/distributor 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l3fwd-graph/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l3fwd-graph 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/main.c to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:44.472 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/examples/l2fwd-crypto/Makefile to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/share/dpdk/examples/l2fwd-crypto 00:02:44.472 Installing lib/librte_kvargs.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_telemetry.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_eal.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_rcu.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_mempool.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_mbuf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_net.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_meter.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_ethdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_ethdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_cmdline.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_metrics.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_hash.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_timer.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_acl.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_bbdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_bitratestats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_bpf.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_cfgfile.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_compressdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_cryptodev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_distributor.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_efd.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_eventdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_gpudev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.472 Installing lib/librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_gro.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_gso.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_ip_frag.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_jobstats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_latencystats.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_lpm.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_member.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_pcapng.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_power.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_rawdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_regexdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_regexdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_dmadev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_rib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_reorder.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_sched.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_security.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_stack.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_vhost.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_ipsec.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_fib.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_port.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_pdump.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_table.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_pipeline.a to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.473 Installing lib/librte_graph.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing lib/librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing lib/librte_node.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing lib/librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing drivers/librte_bus_pci.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing drivers/librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:44.737 Installing drivers/librte_bus_vdev.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing drivers/librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:44.737 Installing drivers/librte_mempool_ring.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing drivers/librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:44.737 Installing drivers/librte_net_i40e.a to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:44.737 Installing drivers/librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0 00:02:44.737 Installing app/dpdk-dumpcap to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-pdump to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-proc-info to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-acl to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-bbdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-cmdline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-compress-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-crypto-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-eventdev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-fib to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-flow-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-gpudev to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-pipeline to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-testpmd to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-regex to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-sad to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing app/dpdk-test-security-perf to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/rte_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/kvargs/rte_kvargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/telemetry/rte_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_spinlock.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/generic/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include/generic 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cpuflags.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_cycles.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_io.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.737 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_memcpy.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_pause.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_power_intrinsics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_prefetch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rtm.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_rwlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_spinlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_vect.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_atomic_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_32.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/x86/include/rte_byteorder_64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_alarm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitmap.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bitops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_branch_prediction.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_bus.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_class.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_compat.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_debug.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_dev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_devargs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_memconfig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_eal_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_errno.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_epoll.h to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_fbarray.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hexdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_hypervisor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_interrupts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_keepalive.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_launch.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_log.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_malloc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_mcslock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memory.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_memzone.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_feature_defs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pci_dev_features.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_per_lcore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_pflock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_random.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_reciprocal.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqcount.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_seqlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_service_component.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_string_fns.h 
to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_tailq.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_thread.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_ticketlock.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_time.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_trace_point_register.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_uuid.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_version.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/include/rte_vfio.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eal/linux/include/rte_os.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_c11_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_generic_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_hts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_peek_zc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 
00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ring/rte_ring_rts_elem_pvt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rcu/rte_rcu_qsbr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mempool/rte_mempool_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.738 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_ptype.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_pool_ops.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/mbuf/rte_mbuf_dyn.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ip.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_tcp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_udp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_esp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_sctp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_icmp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_arp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ether.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_macsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_vxlan.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gre.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_gtp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_net_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_mpls.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_higig.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ecpri.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_geneve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_l2tpv2.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/net/rte_ppp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/meter/rte_meter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_cman.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_dev_info.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_flow_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_mtr_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_tm_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_ethdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ethdev/rte_eth_ctrl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pci/rte_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_num.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_ipaddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_etheraddr.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_string.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_rdline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_vt100.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_socket.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_cirbuf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cmdline/cmdline_parse_portlist.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/metrics/rte_metrics_telemetry.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_fbk_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash_crc.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_jhash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_sw.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_crc_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/hash/rte_thash_x86_gfni.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/timer/rte_timer.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/acl/rte_acl_osdep.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bbdev/rte_bbdev_op.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bitratestats/rte_bitrate.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/bpf_def.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/bpf/rte_bpf_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cfgfile/rte_cfgfile.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_compressdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/compressdev/rte_comp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.739 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_sym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_crypto_asym.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/cryptodev/rte_cryptodev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/distributor/rte_distributor.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/efd/rte_efd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_crypto_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_rx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_eth_tx_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_event_timer_adapter.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_trace_fp.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/eventdev/rte_eventdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gpudev/rte_gpudev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gro/rte_gro.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/gso/rte_gso.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ip_frag/rte_ip_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/jobstats/rte_jobstats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/latencystats/rte_latencystats.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_altivec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_neon.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_scalar.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sse.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/lpm/rte_lpm_sve.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/member/rte_member.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pcapng/rte_pcapng.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_empty_poll.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_intel_uncore.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_pmd_mgmt.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/power/rte_power_guest_channel.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rawdev/rte_rawdev_pmd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/regexdev/rte_regexdev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/dmadev/rte_dmadev_core.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/rib/rte_rib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/reorder/rte_reorder.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_approx.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_red.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_sched_common.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/sched/rte_pie.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/security/rte_security_driver.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_std.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_generic.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_c11.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/stack/rte_stack_lf_stubs.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vdpa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_async.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/vhost/rte_vhost_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sa.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_sad.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/ipsec/rte_ipsec_group.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/fib/rte_fib6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_frag.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ras.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sched.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_sym_crypto.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_port_eventdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ethdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_fd.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_ring.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.740 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/port/rte_swx_port_source_sink.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pdump/rte_pdump.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_em.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_learner.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_selector.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_swx_table_wm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_acl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_array.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_cuckoo.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_lpm_ipv6.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_stub.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_lru_x86.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/table/rte_table_hash_func_arm64.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_port_in_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_table_action.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_pipeline.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_extern.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/pipeline/rte_swx_ctl.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/graph/rte_graph_worker.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_ip4_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/lib/node/rte_node_eth_api.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/pci/rte_bus_pci.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/bus/vdev/rte_bus_vdev.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/drivers/net/i40e/rte_pmd_i40e.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-devbind.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-pmdinfo.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-telemetry.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/usertools/dpdk-hugepages.py to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/bin
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/rte_build_config.h to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk-libs.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:44.741 Installing /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build-tmp/meson-private/libdpdk.pc to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig
00:02:44.741 Installing symlink pointing to librte_kvargs.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so.23
00:02:44.741 Installing symlink pointing to librte_kvargs.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_kvargs.so
00:02:44.741 Installing symlink pointing to librte_telemetry.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so.23
00:02:44.741 Installing symlink pointing to librte_telemetry.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_telemetry.so
00:02:44.741 Installing symlink pointing to librte_eal.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so.23
00:02:44.741 Installing symlink pointing to librte_eal.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eal.so 00:02:44.741 Installing symlink pointing to librte_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so.23 00:02:44.741 Installing symlink pointing to librte_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ring.so 00:02:44.741 Installing symlink pointing to librte_rcu.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so.23 00:02:44.741 Installing symlink pointing to librte_rcu.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rcu.so 00:02:44.741 Installing symlink pointing to librte_mempool.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so.23 00:02:44.741 Installing symlink pointing to librte_mempool.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mempool.so 00:02:44.741 Installing symlink pointing to librte_mbuf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so.23 00:02:44.741 Installing symlink pointing to librte_mbuf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_mbuf.so 00:02:44.741 Installing symlink pointing to librte_net.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so.23 00:02:44.741 Installing symlink pointing to librte_net.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_net.so 00:02:44.741 Installing symlink pointing to librte_meter.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so.23 00:02:44.741 Installing symlink pointing to librte_meter.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_meter.so 00:02:44.741 Installing symlink pointing to librte_ethdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so.23 00:02:44.741 Installing symlink pointing to librte_ethdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ethdev.so 00:02:44.741 Installing symlink pointing to librte_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so.23 00:02:44.741 Installing symlink pointing to librte_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pci.so 00:02:44.741 Installing symlink pointing to librte_cmdline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so.23 00:02:44.741 Installing symlink pointing to librte_cmdline.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cmdline.so 00:02:44.741 Installing symlink pointing to librte_metrics.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so.23 00:02:44.741 Installing symlink pointing to librte_metrics.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_metrics.so 00:02:44.741 Installing symlink pointing to librte_hash.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so.23 00:02:44.741 Installing symlink pointing to librte_hash.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_hash.so 00:02:44.741 Installing symlink pointing to librte_timer.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so.23 00:02:44.741 Installing symlink pointing to librte_timer.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_timer.so 00:02:44.741 Installing symlink pointing to librte_acl.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so.23 00:02:44.741 Installing symlink pointing to librte_acl.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_acl.so 00:02:44.741 Installing symlink pointing to 
librte_bbdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so.23 00:02:44.741 Installing symlink pointing to librte_bbdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bbdev.so 00:02:44.741 Installing symlink pointing to librte_bitratestats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so.23 00:02:44.741 Installing symlink pointing to librte_bitratestats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bitratestats.so 00:02:44.741 Installing symlink pointing to librte_bpf.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so.23 00:02:44.741 Installing symlink pointing to librte_bpf.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_bpf.so 00:02:44.741 Installing symlink pointing to librte_cfgfile.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so.23 00:02:44.741 Installing symlink pointing to librte_cfgfile.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cfgfile.so 00:02:44.741 Installing symlink pointing to librte_compressdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so.23 00:02:44.741 Installing symlink pointing to librte_compressdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_compressdev.so 00:02:44.741 Installing symlink pointing to librte_cryptodev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so.23 00:02:44.742 Installing symlink pointing to librte_cryptodev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_cryptodev.so 00:02:44.742 Installing symlink pointing to librte_distributor.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so.23 00:02:44.742 Installing symlink pointing to librte_distributor.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_distributor.so 00:02:44.742 Installing symlink pointing to librte_efd.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so.23 00:02:44.742 Installing symlink pointing to librte_efd.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_efd.so 00:02:44.742 Installing symlink pointing to librte_eventdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so.23 00:02:44.742 Installing symlink pointing to librte_eventdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_eventdev.so 00:02:44.742 Installing symlink pointing to librte_gpudev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so.23 00:02:44.742 Installing symlink pointing to librte_gpudev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gpudev.so 00:02:44.742 Installing symlink pointing to librte_gro.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so.23 00:02:44.742 Installing symlink pointing to librte_gro.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gro.so 00:02:44.742 Installing symlink pointing to librte_gso.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so.23 00:02:44.742 Installing symlink pointing to librte_gso.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_gso.so 00:02:44.742 Installing symlink pointing to librte_ip_frag.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so.23 00:02:44.742 Installing symlink pointing to librte_ip_frag.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ip_frag.so 00:02:44.742 Installing symlink pointing to librte_jobstats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so.23 00:02:44.742 Installing 
symlink pointing to librte_jobstats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_jobstats.so 00:02:44.742 Installing symlink pointing to librte_latencystats.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so.23 00:02:44.742 Installing symlink pointing to librte_latencystats.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_latencystats.so 00:02:44.742 Installing symlink pointing to librte_lpm.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so.23 00:02:44.742 Installing symlink pointing to librte_lpm.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_lpm.so 00:02:44.742 Installing symlink pointing to librte_member.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so.23 00:02:44.742 Installing symlink pointing to librte_member.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_member.so 00:02:44.742 Installing symlink pointing to librte_pcapng.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so.23 00:02:44.742 Installing symlink pointing to librte_pcapng.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pcapng.so 00:02:44.742 Installing symlink pointing to librte_power.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so.23 00:02:44.742 Installing symlink pointing to librte_power.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_power.so 00:02:44.742 Installing symlink pointing to librte_rawdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so.23 00:02:44.742 Installing symlink pointing to librte_rawdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rawdev.so 00:02:44.742 Installing symlink pointing to librte_regexdev.so.23.0 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so.23 00:02:44.742 Installing symlink pointing to librte_regexdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_regexdev.so 00:02:44.742 Installing symlink pointing to librte_dmadev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so.23 00:02:44.742 Installing symlink pointing to librte_dmadev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_dmadev.so 00:02:44.742 Installing symlink pointing to librte_rib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so.23 00:02:44.742 Installing symlink pointing to librte_rib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_rib.so 00:02:44.742 Installing symlink pointing to librte_reorder.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so.23 00:02:44.742 Installing symlink pointing to librte_reorder.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_reorder.so 00:02:44.742 Installing symlink pointing to librte_sched.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so.23 00:02:44.742 Installing symlink pointing to librte_sched.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_sched.so 00:02:44.742 Installing symlink pointing to librte_security.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so.23 00:02:44.742 Installing symlink pointing to librte_security.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_security.so 00:02:44.742 Installing symlink pointing to librte_stack.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so.23 00:02:44.742 Installing symlink pointing to librte_stack.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_stack.so 00:02:44.742 
Installing symlink pointing to librte_vhost.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so.23 00:02:44.742 Installing symlink pointing to librte_vhost.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_vhost.so 00:02:44.742 Installing symlink pointing to librte_ipsec.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so.23 00:02:44.742 Installing symlink pointing to librte_ipsec.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_ipsec.so 00:02:44.742 Installing symlink pointing to librte_fib.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so.23 00:02:44.742 Installing symlink pointing to librte_fib.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_fib.so 00:02:44.742 Installing symlink pointing to librte_port.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so.23 00:02:44.742 Installing symlink pointing to librte_port.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_port.so 00:02:44.742 Installing symlink pointing to librte_pdump.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so.23 00:02:44.742 Installing symlink pointing to librte_pdump.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pdump.so 00:02:44.742 Installing symlink pointing to librte_table.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so.23 00:02:44.742 Installing symlink pointing to librte_table.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_table.so 00:02:44.742 Installing symlink pointing to librte_pipeline.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so.23 00:02:44.742 Installing symlink pointing to librte_pipeline.so.23 to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_pipeline.so 00:02:44.742 Installing symlink pointing to librte_graph.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so.23 00:02:44.742 Installing symlink pointing to librte_graph.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_graph.so 00:02:44.742 Installing symlink pointing to librte_node.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so.23 00:02:44.742 Installing symlink pointing to librte_node.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/librte_node.so 00:02:44.742 Installing symlink pointing to librte_bus_pci.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23 00:02:44.742 Installing symlink pointing to librte_bus_pci.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:02:44.742 Installing symlink pointing to librte_bus_vdev.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23 00:02:44.742 Installing symlink pointing to librte_bus_vdev.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:02:44.742 Installing symlink pointing to librte_mempool_ring.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23 00:02:44.742 Installing symlink pointing to librte_mempool_ring.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:02:44.742 Installing symlink pointing to librte_net_i40e.so.23.0 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23 00:02:44.742 './librte_bus_pci.so' -> 'dpdk/pmds-23.0/librte_bus_pci.so' 00:02:44.742 './librte_bus_pci.so.23' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23' 00:02:44.742 
'./librte_bus_pci.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_pci.so.23.0' 00:02:44.742 './librte_bus_vdev.so' -> 'dpdk/pmds-23.0/librte_bus_vdev.so' 00:02:44.742 './librte_bus_vdev.so.23' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23' 00:02:44.742 './librte_bus_vdev.so.23.0' -> 'dpdk/pmds-23.0/librte_bus_vdev.so.23.0' 00:02:44.742 './librte_mempool_ring.so' -> 'dpdk/pmds-23.0/librte_mempool_ring.so' 00:02:44.742 './librte_mempool_ring.so.23' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23' 00:02:44.743 './librte_mempool_ring.so.23.0' -> 'dpdk/pmds-23.0/librte_mempool_ring.so.23.0' 00:02:44.743 './librte_net_i40e.so' -> 'dpdk/pmds-23.0/librte_net_i40e.so' 00:02:44.743 './librte_net_i40e.so.23' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23' 00:02:44.743 './librte_net_i40e.so.23.0' -> 'dpdk/pmds-23.0/librte_net_i40e.so.23.0' 00:02:44.743 Installing symlink pointing to librte_net_i40e.so.23 to /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:02:44.743 Running custom install script '/bin/sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/config/../buildtools/symlink-drivers-solibs.sh lib dpdk/pmds-23.0' 00:02:44.743 05:02:58 build_native_dpdk -- common/autobuild_common.sh@220 -- $ cat 00:02:44.743 05:02:58 build_native_dpdk -- common/autobuild_common.sh@225 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:44.743 00:02:44.743 real 0m28.274s 00:02:44.743 user 7m45.033s 00:02:44.743 sys 1m59.572s 00:02:44.743 05:02:58 build_native_dpdk -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:44.743 05:02:58 build_native_dpdk -- common/autotest_common.sh@10 -- $ set +x 00:02:44.743 ************************************ 00:02:44.743 END TEST build_native_dpdk 00:02:44.743 ************************************ 00:02:44.743 05:02:58 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:02:44.743 05:02:58 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:02:44.743 05:02:58 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 
00:02:44.743 05:02:58 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:02:44.743 05:02:58 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:02:44.743 05:02:58 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:02:44.743 05:02:58 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:02:44.743 05:02:58 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-dpdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build --with-shared 00:02:45.003 Using /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/pkgconfig for additional libs... 00:02:45.263 DPDK libraries: /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:02:45.263 DPDK includes: //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:02:45.263 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:02:45.523 Using 'verbs' RDMA provider 00:02:58.689 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:03:10.907 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:03:11.167 Creating mk/config.mk...done. 00:03:11.167 Creating mk/cc.flags.mk...done. 00:03:11.167 Type 'make' to build. 
00:03:11.167 05:03:24 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:03:11.167 05:03:24 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:03:11.167 05:03:24 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:03:11.167 05:03:24 -- common/autotest_common.sh@10 -- $ set +x 00:03:11.167 ************************************ 00:03:11.167 START TEST make 00:03:11.167 ************************************ 00:03:11.167 05:03:24 make -- common/autotest_common.sh@1129 -- $ make -j96 00:03:13.084 The Meson build system 00:03:13.085 Version: 1.5.0 00:03:13.085 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:03:13.085 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:13.085 Build type: native build 00:03:13.085 Project name: libvfio-user 00:03:13.085 Project version: 0.0.1 00:03:13.085 C compiler for the host machine: gcc (gcc 13.3.1 "gcc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:03:13.085 C linker for the host machine: gcc ld.bfd 2.40-14 00:03:13.085 Host machine cpu family: x86_64 00:03:13.085 Host machine cpu: x86_64 00:03:13.085 Run-time dependency threads found: YES 00:03:13.085 Library dl found: YES 00:03:13.085 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:03:13.085 Run-time dependency json-c found: YES 0.17 00:03:13.085 Run-time dependency cmocka found: YES 1.1.7 00:03:13.085 Program pytest-3 found: NO 00:03:13.085 Program flake8 found: NO 00:03:13.085 Program misspell-fixer found: NO 00:03:13.085 Program restructuredtext-lint found: NO 00:03:13.085 Program valgrind found: YES (/usr/bin/valgrind) 00:03:13.085 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:03:13.085 Compiler for C supports arguments -Wmissing-declarations: YES 00:03:13.085 Compiler for C supports arguments -Wwrite-strings: YES 00:03:13.085 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites 
arg in add_test_setup. 00:03:13.085 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:03:13.085 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:03:13.085 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:03:13.085 Build targets in project: 8 00:03:13.085 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:03:13.085 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:03:13.085 00:03:13.085 libvfio-user 0.0.1 00:03:13.085 00:03:13.085 User defined options 00:03:13.085 buildtype : debug 00:03:13.085 default_library: shared 00:03:13.085 libdir : /usr/local/lib 00:03:13.085 00:03:13.085 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:03:13.649 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:13.907 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:03:13.907 [2/37] Compiling C object samples/null.p/null.c.o 00:03:13.907 [3/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:03:13.907 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:03:13.907 [5/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:03:13.907 [6/37] Compiling C object samples/lspci.p/lspci.c.o 00:03:13.907 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:03:13.907 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:03:13.908 [9/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:03:13.908 [10/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:03:13.908 [11/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:03:13.908 [12/37] Compiling C object 
test/unit_tests.p/.._lib_tran.c.o 00:03:13.908 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:03:13.908 [14/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:03:13.908 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:03:13.908 [16/37] Compiling C object test/unit_tests.p/mocks.c.o 00:03:13.908 [17/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:03:13.908 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:03:13.908 [19/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:03:13.908 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:03:13.908 [21/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:03:13.908 [22/37] Compiling C object samples/server.p/server.c.o 00:03:13.908 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:03:13.908 [24/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:03:13.908 [25/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:03:13.908 [26/37] Compiling C object samples/client.p/client.c.o 00:03:13.908 [27/37] Linking target samples/client 00:03:13.908 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:03:13.908 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:03:14.167 [30/37] Linking target lib/libvfio-user.so.0.0.1 00:03:14.167 [31/37] Linking target test/unit_tests 00:03:14.167 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:03:14.167 [33/37] Linking target samples/shadow_ioeventfd_server 00:03:14.167 [34/37] Linking target samples/null 00:03:14.167 [35/37] Linking target samples/lspci 00:03:14.167 [36/37] Linking target samples/gpio-pci-idio-16 00:03:14.167 [37/37] Linking target samples/server 00:03:14.167 INFO: autodetecting backend as ninja 00:03:14.167 INFO: calculating backend command to run: /usr/local/bin/ninja -C 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:14.425 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:03:14.684 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:03:14.684 ninja: no work to do. 00:03:46.775 CC lib/ut_mock/mock.o 00:03:46.775 CC lib/log/log.o 00:03:46.775 CC lib/ut/ut.o 00:03:46.775 CC lib/log/log_flags.o 00:03:46.775 CC lib/log/log_deprecated.o 00:03:46.775 LIB libspdk_ut.a 00:03:46.775 LIB libspdk_ut_mock.a 00:03:46.775 LIB libspdk_log.a 00:03:46.775 SO libspdk_ut.so.2.0 00:03:46.775 SO libspdk_ut_mock.so.6.0 00:03:46.775 SO libspdk_log.so.7.1 00:03:46.775 SYMLINK libspdk_ut.so 00:03:46.775 SYMLINK libspdk_ut_mock.so 00:03:46.775 SYMLINK libspdk_log.so 00:03:46.775 CC lib/util/base64.o 00:03:46.775 CC lib/util/bit_array.o 00:03:46.775 CC lib/util/cpuset.o 00:03:46.775 CC lib/util/crc16.o 00:03:46.775 CC lib/util/crc32.o 00:03:46.775 CC lib/util/crc32c.o 00:03:46.775 CC lib/util/fd.o 00:03:46.775 CC lib/util/crc32_ieee.o 00:03:46.775 CC lib/util/crc64.o 00:03:46.775 CC lib/util/dif.o 00:03:46.775 CC lib/util/fd_group.o 00:03:46.775 CC lib/util/file.o 00:03:46.775 CC lib/dma/dma.o 00:03:46.775 CC lib/util/hexlify.o 00:03:46.775 CXX lib/trace_parser/trace.o 00:03:46.775 CC lib/util/iov.o 00:03:46.775 CC lib/ioat/ioat.o 00:03:46.775 CC lib/util/math.o 00:03:46.775 CC lib/util/net.o 00:03:46.775 CC lib/util/pipe.o 00:03:46.775 CC lib/util/strerror_tls.o 00:03:46.775 CC lib/util/string.o 00:03:46.775 CC lib/util/uuid.o 00:03:46.775 CC lib/util/xor.o 00:03:46.775 CC lib/util/zipf.o 00:03:46.775 CC lib/util/md5.o 00:03:46.775 CC lib/vfio_user/host/vfio_user_pci.o 00:03:46.775 CC lib/vfio_user/host/vfio_user.o 00:03:46.775 LIB libspdk_dma.a 00:03:46.775 SO libspdk_dma.so.5.0 00:03:46.775 SYMLINK libspdk_dma.so 
00:03:46.775 LIB libspdk_ioat.a 00:03:46.775 SO libspdk_ioat.so.7.0 00:03:46.775 LIB libspdk_vfio_user.a 00:03:46.775 SYMLINK libspdk_ioat.so 00:03:46.775 SO libspdk_vfio_user.so.5.0 00:03:46.775 SYMLINK libspdk_vfio_user.so 00:03:46.775 LIB libspdk_util.a 00:03:46.775 SO libspdk_util.so.10.1 00:03:46.775 SYMLINK libspdk_util.so 00:03:46.775 CC lib/conf/conf.o 00:03:46.775 CC lib/vmd/vmd.o 00:03:46.775 CC lib/vmd/led.o 00:03:46.775 CC lib/json/json_parse.o 00:03:46.775 CC lib/json/json_util.o 00:03:46.775 CC lib/json/json_write.o 00:03:46.775 CC lib/rdma_utils/rdma_utils.o 00:03:46.775 CC lib/idxd/idxd.o 00:03:46.775 CC lib/env_dpdk/env.o 00:03:46.775 CC lib/idxd/idxd_user.o 00:03:46.775 CC lib/env_dpdk/memory.o 00:03:46.775 CC lib/idxd/idxd_kernel.o 00:03:46.775 CC lib/env_dpdk/pci.o 00:03:46.775 CC lib/env_dpdk/init.o 00:03:46.775 CC lib/env_dpdk/threads.o 00:03:46.775 CC lib/env_dpdk/pci_ioat.o 00:03:46.775 CC lib/env_dpdk/pci_virtio.o 00:03:46.775 CC lib/env_dpdk/pci_vmd.o 00:03:46.775 CC lib/env_dpdk/pci_idxd.o 00:03:46.775 CC lib/env_dpdk/pci_event.o 00:03:46.775 CC lib/env_dpdk/sigbus_handler.o 00:03:46.775 CC lib/env_dpdk/pci_dpdk.o 00:03:46.775 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:46.775 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:46.775 LIB libspdk_conf.a 00:03:46.775 SO libspdk_conf.so.6.0 00:03:46.775 SYMLINK libspdk_conf.so 00:03:46.775 LIB libspdk_rdma_utils.a 00:03:46.775 LIB libspdk_json.a 00:03:46.775 SO libspdk_rdma_utils.so.1.0 00:03:46.775 SO libspdk_json.so.6.0 00:03:46.775 SYMLINK libspdk_rdma_utils.so 00:03:46.775 SYMLINK libspdk_json.so 00:03:46.775 LIB libspdk_idxd.a 00:03:46.775 SO libspdk_idxd.so.12.1 00:03:46.775 LIB libspdk_vmd.a 00:03:46.775 SO libspdk_vmd.so.6.0 00:03:46.775 SYMLINK libspdk_idxd.so 00:03:46.775 SYMLINK libspdk_vmd.so 00:03:46.775 LIB libspdk_trace_parser.a 00:03:46.775 SO libspdk_trace_parser.so.6.0 00:03:46.775 CC lib/rdma_provider/common.o 00:03:46.775 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:46.775 CC 
lib/jsonrpc/jsonrpc_server.o 00:03:46.775 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:46.775 CC lib/jsonrpc/jsonrpc_client.o 00:03:46.775 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:46.775 SYMLINK libspdk_trace_parser.so 00:03:46.775 LIB libspdk_rdma_provider.a 00:03:46.775 LIB libspdk_jsonrpc.a 00:03:46.775 SO libspdk_rdma_provider.so.7.0 00:03:46.775 SO libspdk_jsonrpc.so.6.0 00:03:46.775 SYMLINK libspdk_rdma_provider.so 00:03:46.775 LIB libspdk_env_dpdk.a 00:03:46.775 SYMLINK libspdk_jsonrpc.so 00:03:46.775 SO libspdk_env_dpdk.so.15.1 00:03:46.775 SYMLINK libspdk_env_dpdk.so 00:03:46.775 CC lib/rpc/rpc.o 00:03:46.775 LIB libspdk_rpc.a 00:03:46.775 SO libspdk_rpc.so.6.0 00:03:46.775 SYMLINK libspdk_rpc.so 00:03:46.775 CC lib/notify/notify.o 00:03:46.775 CC lib/notify/notify_rpc.o 00:03:46.775 CC lib/trace/trace.o 00:03:46.775 CC lib/trace/trace_flags.o 00:03:46.775 CC lib/trace/trace_rpc.o 00:03:46.775 CC lib/keyring/keyring.o 00:03:46.775 CC lib/keyring/keyring_rpc.o 00:03:46.775 LIB libspdk_notify.a 00:03:46.775 SO libspdk_notify.so.6.0 00:03:46.775 LIB libspdk_keyring.a 00:03:46.775 LIB libspdk_trace.a 00:03:46.775 SYMLINK libspdk_notify.so 00:03:46.775 SO libspdk_keyring.so.2.0 00:03:46.775 SO libspdk_trace.so.11.0 00:03:46.775 SYMLINK libspdk_keyring.so 00:03:46.775 SYMLINK libspdk_trace.so 00:03:46.775 CC lib/thread/thread.o 00:03:46.775 CC lib/thread/iobuf.o 00:03:46.775 CC lib/sock/sock.o 00:03:46.775 CC lib/sock/sock_rpc.o 00:03:46.775 LIB libspdk_sock.a 00:03:46.775 SO libspdk_sock.so.10.0 00:03:46.775 SYMLINK libspdk_sock.so 00:03:46.775 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:46.775 CC lib/nvme/nvme_ctrlr.o 00:03:46.775 CC lib/nvme/nvme_fabric.o 00:03:46.775 CC lib/nvme/nvme_ns_cmd.o 00:03:46.775 CC lib/nvme/nvme_ns.o 00:03:46.775 CC lib/nvme/nvme_pcie_common.o 00:03:46.775 CC lib/nvme/nvme_pcie.o 00:03:46.775 CC lib/nvme/nvme_qpair.o 00:03:46.775 CC lib/nvme/nvme.o 00:03:46.775 CC lib/nvme/nvme_quirks.o 00:03:46.775 CC lib/nvme/nvme_transport.o 00:03:46.775 
CC lib/nvme/nvme_discovery.o 00:03:46.775 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:46.775 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:46.775 CC lib/nvme/nvme_tcp.o 00:03:46.775 CC lib/nvme/nvme_opal.o 00:03:46.775 CC lib/nvme/nvme_io_msg.o 00:03:46.775 CC lib/nvme/nvme_poll_group.o 00:03:46.775 CC lib/nvme/nvme_zns.o 00:03:46.775 CC lib/nvme/nvme_stubs.o 00:03:46.775 CC lib/nvme/nvme_auth.o 00:03:46.775 CC lib/nvme/nvme_cuse.o 00:03:46.775 CC lib/nvme/nvme_vfio_user.o 00:03:46.775 CC lib/nvme/nvme_rdma.o 00:03:47.034 LIB libspdk_thread.a 00:03:47.034 SO libspdk_thread.so.11.0 00:03:47.034 SYMLINK libspdk_thread.so 00:03:47.294 CC lib/virtio/virtio.o 00:03:47.294 CC lib/virtio/virtio_vhost_user.o 00:03:47.294 CC lib/virtio/virtio_vfio_user.o 00:03:47.294 CC lib/virtio/virtio_pci.o 00:03:47.294 CC lib/vfu_tgt/tgt_rpc.o 00:03:47.294 CC lib/blob/blobstore.o 00:03:47.294 CC lib/vfu_tgt/tgt_endpoint.o 00:03:47.294 CC lib/blob/request.o 00:03:47.294 CC lib/blob/blob_bs_dev.o 00:03:47.294 CC lib/blob/zeroes.o 00:03:47.294 CC lib/accel/accel_sw.o 00:03:47.294 CC lib/accel/accel.o 00:03:47.294 CC lib/accel/accel_rpc.o 00:03:47.294 CC lib/fsdev/fsdev.o 00:03:47.294 CC lib/fsdev/fsdev_io.o 00:03:47.294 CC lib/fsdev/fsdev_rpc.o 00:03:47.294 CC lib/init/subsystem.o 00:03:47.294 CC lib/init/json_config.o 00:03:47.294 CC lib/init/subsystem_rpc.o 00:03:47.294 CC lib/init/rpc.o 00:03:47.553 LIB libspdk_init.a 00:03:47.553 SO libspdk_init.so.6.0 00:03:47.553 LIB libspdk_virtio.a 00:03:47.553 LIB libspdk_vfu_tgt.a 00:03:47.811 SO libspdk_vfu_tgt.so.3.0 00:03:47.811 SO libspdk_virtio.so.7.0 00:03:47.811 SYMLINK libspdk_init.so 00:03:47.811 SYMLINK libspdk_vfu_tgt.so 00:03:47.811 SYMLINK libspdk_virtio.so 00:03:47.811 LIB libspdk_fsdev.a 00:03:47.811 SO libspdk_fsdev.so.2.0 00:03:48.070 SYMLINK libspdk_fsdev.so 00:03:48.070 CC lib/event/app.o 00:03:48.070 CC lib/event/reactor.o 00:03:48.070 CC lib/event/log_rpc.o 00:03:48.070 CC lib/event/app_rpc.o 00:03:48.070 CC lib/event/scheduler_static.o 
00:03:48.328 LIB libspdk_accel.a 00:03:48.328 SO libspdk_accel.so.16.0 00:03:48.328 SYMLINK libspdk_accel.so 00:03:48.328 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:48.328 LIB libspdk_nvme.a 00:03:48.328 LIB libspdk_event.a 00:03:48.328 SO libspdk_event.so.14.0 00:03:48.588 SO libspdk_nvme.so.15.0 00:03:48.588 SYMLINK libspdk_event.so 00:03:48.588 CC lib/bdev/bdev.o 00:03:48.588 CC lib/bdev/bdev_rpc.o 00:03:48.588 SYMLINK libspdk_nvme.so 00:03:48.588 CC lib/bdev/bdev_zone.o 00:03:48.588 CC lib/bdev/part.o 00:03:48.588 CC lib/bdev/scsi_nvme.o 00:03:48.848 LIB libspdk_fuse_dispatcher.a 00:03:48.848 SO libspdk_fuse_dispatcher.so.1.0 00:03:48.848 SYMLINK libspdk_fuse_dispatcher.so 00:03:49.418 LIB libspdk_blob.a 00:03:49.418 SO libspdk_blob.so.12.0 00:03:49.678 SYMLINK libspdk_blob.so 00:03:49.938 CC lib/lvol/lvol.o 00:03:49.938 CC lib/blobfs/blobfs.o 00:03:49.938 CC lib/blobfs/tree.o 00:03:50.506 LIB libspdk_bdev.a 00:03:50.506 SO libspdk_bdev.so.17.0 00:03:50.506 LIB libspdk_blobfs.a 00:03:50.506 SO libspdk_blobfs.so.11.0 00:03:50.506 LIB libspdk_lvol.a 00:03:50.506 SYMLINK libspdk_bdev.so 00:03:50.766 SYMLINK libspdk_blobfs.so 00:03:50.766 SO libspdk_lvol.so.11.0 00:03:50.766 SYMLINK libspdk_lvol.so 00:03:51.027 CC lib/ftl/ftl_core.o 00:03:51.027 CC lib/ftl/ftl_init.o 00:03:51.027 CC lib/ftl/ftl_layout.o 00:03:51.027 CC lib/ftl/ftl_debug.o 00:03:51.027 CC lib/ftl/ftl_io.o 00:03:51.027 CC lib/ftl/ftl_sb.o 00:03:51.027 CC lib/ftl/ftl_l2p.o 00:03:51.027 CC lib/ftl/ftl_l2p_flat.o 00:03:51.027 CC lib/ftl/ftl_nv_cache.o 00:03:51.027 CC lib/ftl/ftl_band.o 00:03:51.027 CC lib/ftl/ftl_band_ops.o 00:03:51.027 CC lib/ftl/ftl_writer.o 00:03:51.027 CC lib/ftl/ftl_rq.o 00:03:51.027 CC lib/ftl/ftl_reloc.o 00:03:51.027 CC lib/ftl/ftl_l2p_cache.o 00:03:51.027 CC lib/ftl/ftl_p2l.o 00:03:51.027 CC lib/nvmf/ctrlr.o 00:03:51.027 CC lib/nvmf/ctrlr_discovery.o 00:03:51.027 CC lib/ftl/ftl_p2l_log.o 00:03:51.027 CC lib/nvmf/ctrlr_bdev.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt.o 
00:03:51.027 CC lib/nvmf/subsystem.o 00:03:51.027 CC lib/ublk/ublk.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:51.027 CC lib/nbd/nbd.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:51.027 CC lib/nvmf/nvmf.o 00:03:51.027 CC lib/scsi/lun.o 00:03:51.027 CC lib/ublk/ublk_rpc.o 00:03:51.027 CC lib/scsi/dev.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:51.027 CC lib/nvmf/nvmf_rpc.o 00:03:51.027 CC lib/nbd/nbd_rpc.o 00:03:51.027 CC lib/scsi/port.o 00:03:51.027 CC lib/nvmf/tcp.o 00:03:51.027 CC lib/scsi/scsi.o 00:03:51.027 CC lib/nvmf/transport.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:51.027 CC lib/nvmf/stubs.o 00:03:51.027 CC lib/scsi/scsi_pr.o 00:03:51.027 CC lib/nvmf/mdns_server.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:51.027 CC lib/nvmf/vfio_user.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:51.027 CC lib/scsi/scsi_bdev.o 00:03:51.027 CC lib/scsi/task.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:51.027 CC lib/scsi/scsi_rpc.o 00:03:51.027 CC lib/nvmf/rdma.o 00:03:51.027 CC lib/nvmf/auth.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:51.027 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:51.027 CC lib/ftl/utils/ftl_conf.o 00:03:51.027 CC lib/ftl/utils/ftl_md.o 00:03:51.027 CC lib/ftl/utils/ftl_bitmap.o 00:03:51.027 CC lib/ftl/utils/ftl_mempool.o 00:03:51.027 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:51.027 CC lib/ftl/utils/ftl_property.o 00:03:51.027 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:51.027 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:51.027 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:51.027 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:51.027 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:51.027 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:51.027 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:51.027 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:51.027 CC 
lib/ftl/nvc/ftl_nvc_dev.o 00:03:51.027 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:51.027 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:51.027 CC lib/ftl/base/ftl_base_dev.o 00:03:51.027 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:51.027 CC lib/ftl/base/ftl_base_bdev.o 00:03:51.027 CC lib/ftl/ftl_trace.o 00:03:51.596 LIB libspdk_nbd.a 00:03:51.596 LIB libspdk_scsi.a 00:03:51.596 SO libspdk_nbd.so.7.0 00:03:51.596 SO libspdk_scsi.so.9.0 00:03:51.854 SYMLINK libspdk_nbd.so 00:03:51.854 SYMLINK libspdk_scsi.so 00:03:51.854 LIB libspdk_ublk.a 00:03:51.854 SO libspdk_ublk.so.3.0 00:03:51.854 LIB libspdk_ftl.a 00:03:51.854 SYMLINK libspdk_ublk.so 00:03:52.113 SO libspdk_ftl.so.9.0 00:03:52.113 CC lib/vhost/vhost.o 00:03:52.113 CC lib/vhost/vhost_rpc.o 00:03:52.113 CC lib/vhost/vhost_scsi.o 00:03:52.113 CC lib/vhost/vhost_blk.o 00:03:52.113 CC lib/vhost/rte_vhost_user.o 00:03:52.113 CC lib/iscsi/iscsi.o 00:03:52.113 CC lib/iscsi/conn.o 00:03:52.113 CC lib/iscsi/param.o 00:03:52.113 CC lib/iscsi/init_grp.o 00:03:52.113 CC lib/iscsi/iscsi_subsystem.o 00:03:52.113 CC lib/iscsi/portal_grp.o 00:03:52.113 CC lib/iscsi/tgt_node.o 00:03:52.113 CC lib/iscsi/iscsi_rpc.o 00:03:52.113 CC lib/iscsi/task.o 00:03:52.373 SYMLINK libspdk_ftl.so 00:03:52.941 LIB libspdk_nvmf.a 00:03:52.941 LIB libspdk_vhost.a 00:03:52.941 SO libspdk_nvmf.so.20.0 00:03:52.941 SO libspdk_vhost.so.8.0 00:03:53.201 SYMLINK libspdk_vhost.so 00:03:53.201 SYMLINK libspdk_nvmf.so 00:03:53.201 LIB libspdk_iscsi.a 00:03:53.201 SO libspdk_iscsi.so.8.0 00:03:53.201 SYMLINK libspdk_iscsi.so 00:03:53.771 CC module/env_dpdk/env_dpdk_rpc.o 00:03:53.771 CC module/vfu_device/vfu_virtio.o 00:03:53.771 CC module/vfu_device/vfu_virtio_blk.o 00:03:53.771 CC module/vfu_device/vfu_virtio_scsi.o 00:03:53.771 CC module/vfu_device/vfu_virtio_rpc.o 00:03:53.771 CC module/vfu_device/vfu_virtio_fs.o 00:03:54.030 LIB libspdk_env_dpdk_rpc.a 00:03:54.030 CC module/accel/error/accel_error.o 00:03:54.030 CC module/accel/error/accel_error_rpc.o 
00:03:54.030 CC module/scheduler/gscheduler/gscheduler.o 00:03:54.030 CC module/accel/iaa/accel_iaa.o 00:03:54.030 CC module/accel/iaa/accel_iaa_rpc.o 00:03:54.030 CC module/accel/ioat/accel_ioat.o 00:03:54.030 CC module/accel/ioat/accel_ioat_rpc.o 00:03:54.030 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:54.030 CC module/blob/bdev/blob_bdev.o 00:03:54.030 CC module/accel/dsa/accel_dsa.o 00:03:54.030 CC module/keyring/file/keyring.o 00:03:54.030 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:54.030 CC module/keyring/linux/keyring.o 00:03:54.030 CC module/accel/dsa/accel_dsa_rpc.o 00:03:54.030 CC module/keyring/linux/keyring_rpc.o 00:03:54.030 CC module/keyring/file/keyring_rpc.o 00:03:54.030 CC module/fsdev/aio/fsdev_aio.o 00:03:54.030 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:54.030 SO libspdk_env_dpdk_rpc.so.6.0 00:03:54.030 CC module/fsdev/aio/linux_aio_mgr.o 00:03:54.030 CC module/sock/posix/posix.o 00:03:54.030 SYMLINK libspdk_env_dpdk_rpc.so 00:03:54.290 LIB libspdk_scheduler_gscheduler.a 00:03:54.290 LIB libspdk_keyring_file.a 00:03:54.290 LIB libspdk_keyring_linux.a 00:03:54.290 LIB libspdk_accel_ioat.a 00:03:54.290 LIB libspdk_accel_error.a 00:03:54.290 LIB libspdk_scheduler_dpdk_governor.a 00:03:54.290 SO libspdk_scheduler_gscheduler.so.4.0 00:03:54.290 SO libspdk_keyring_file.so.2.0 00:03:54.290 SO libspdk_keyring_linux.so.1.0 00:03:54.290 LIB libspdk_accel_iaa.a 00:03:54.290 SO libspdk_accel_ioat.so.6.0 00:03:54.290 SO libspdk_accel_error.so.2.0 00:03:54.290 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:54.290 LIB libspdk_scheduler_dynamic.a 00:03:54.290 SO libspdk_accel_iaa.so.3.0 00:03:54.290 SYMLINK libspdk_scheduler_gscheduler.so 00:03:54.290 SO libspdk_scheduler_dynamic.so.4.0 00:03:54.290 SYMLINK libspdk_keyring_file.so 00:03:54.290 SYMLINK libspdk_keyring_linux.so 00:03:54.290 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:54.290 LIB libspdk_blob_bdev.a 00:03:54.290 SYMLINK libspdk_accel_ioat.so 00:03:54.290 LIB 
libspdk_accel_dsa.a 00:03:54.290 SYMLINK libspdk_accel_error.so 00:03:54.290 SO libspdk_blob_bdev.so.12.0 00:03:54.290 SYMLINK libspdk_accel_iaa.so 00:03:54.290 SYMLINK libspdk_scheduler_dynamic.so 00:03:54.290 SO libspdk_accel_dsa.so.5.0 00:03:54.290 SYMLINK libspdk_blob_bdev.so 00:03:54.548 LIB libspdk_vfu_device.a 00:03:54.548 SYMLINK libspdk_accel_dsa.so 00:03:54.548 SO libspdk_vfu_device.so.3.0 00:03:54.548 SYMLINK libspdk_vfu_device.so 00:03:54.548 LIB libspdk_fsdev_aio.a 00:03:54.548 SO libspdk_fsdev_aio.so.1.0 00:03:54.808 LIB libspdk_sock_posix.a 00:03:54.808 SO libspdk_sock_posix.so.6.0 00:03:54.808 SYMLINK libspdk_fsdev_aio.so 00:03:54.808 SYMLINK libspdk_sock_posix.so 00:03:54.808 CC module/bdev/gpt/gpt.o 00:03:54.808 CC module/bdev/gpt/vbdev_gpt.o 00:03:54.808 CC module/bdev/malloc/bdev_malloc.o 00:03:54.808 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:54.808 CC module/bdev/nvme/bdev_nvme.o 00:03:55.068 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:55.068 CC module/bdev/nvme/nvme_rpc.o 00:03:55.068 CC module/bdev/raid/bdev_raid.o 00:03:55.068 CC module/bdev/nvme/bdev_mdns_client.o 00:03:55.068 CC module/bdev/delay/vbdev_delay.o 00:03:55.068 CC module/bdev/nvme/vbdev_opal.o 00:03:55.068 CC module/bdev/raid/bdev_raid_rpc.o 00:03:55.068 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:55.068 CC module/bdev/raid/bdev_raid_sb.o 00:03:55.068 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:55.068 CC module/bdev/raid/raid0.o 00:03:55.068 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:55.068 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:55.068 CC module/bdev/raid/raid1.o 00:03:55.068 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:55.068 CC module/blobfs/bdev/blobfs_bdev.o 00:03:55.068 CC module/bdev/raid/concat.o 00:03:55.068 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:55.068 CC module/bdev/split/vbdev_split.o 00:03:55.068 CC module/bdev/split/vbdev_split_rpc.o 00:03:55.068 CC module/bdev/null/bdev_null.o 00:03:55.068 CC module/bdev/error/vbdev_error.o 
00:03:55.068 CC module/bdev/error/vbdev_error_rpc.o 00:03:55.068 CC module/bdev/lvol/vbdev_lvol.o 00:03:55.068 CC module/bdev/null/bdev_null_rpc.o 00:03:55.068 CC module/bdev/virtio/bdev_virtio_scsi.o 00:03:55.068 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:55.068 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:55.068 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:55.068 CC module/bdev/aio/bdev_aio_rpc.o 00:03:55.068 CC module/bdev/aio/bdev_aio.o 00:03:55.068 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:55.068 CC module/bdev/passthru/vbdev_passthru.o 00:03:55.068 CC module/bdev/iscsi/bdev_iscsi.o 00:03:55.068 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:55.068 CC module/bdev/ftl/bdev_ftl.o 00:03:55.068 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:55.327 LIB libspdk_blobfs_bdev.a 00:03:55.327 SO libspdk_blobfs_bdev.so.6.0 00:03:55.327 LIB libspdk_bdev_split.a 00:03:55.327 LIB libspdk_bdev_error.a 00:03:55.327 LIB libspdk_bdev_gpt.a 00:03:55.327 LIB libspdk_bdev_null.a 00:03:55.327 SYMLINK libspdk_blobfs_bdev.so 00:03:55.327 SO libspdk_bdev_split.so.6.0 00:03:55.327 SO libspdk_bdev_error.so.6.0 00:03:55.327 SO libspdk_bdev_gpt.so.6.0 00:03:55.327 SO libspdk_bdev_null.so.6.0 00:03:55.327 LIB libspdk_bdev_zone_block.a 00:03:55.327 LIB libspdk_bdev_ftl.a 00:03:55.327 LIB libspdk_bdev_malloc.a 00:03:55.327 LIB libspdk_bdev_passthru.a 00:03:55.327 SYMLINK libspdk_bdev_split.so 00:03:55.327 SO libspdk_bdev_ftl.so.6.0 00:03:55.327 SYMLINK libspdk_bdev_gpt.so 00:03:55.327 SO libspdk_bdev_zone_block.so.6.0 00:03:55.327 LIB libspdk_bdev_delay.a 00:03:55.327 SYMLINK libspdk_bdev_error.so 00:03:55.327 LIB libspdk_bdev_iscsi.a 00:03:55.327 SYMLINK libspdk_bdev_null.so 00:03:55.327 SO libspdk_bdev_malloc.so.6.0 00:03:55.327 LIB libspdk_bdev_aio.a 00:03:55.327 SO libspdk_bdev_passthru.so.6.0 00:03:55.327 SO libspdk_bdev_iscsi.so.6.0 00:03:55.327 SO libspdk_bdev_delay.so.6.0 00:03:55.327 SO libspdk_bdev_aio.so.6.0 00:03:55.327 SYMLINK libspdk_bdev_ftl.so 00:03:55.327 SYMLINK 
libspdk_bdev_zone_block.so 00:03:55.587 SYMLINK libspdk_bdev_malloc.so 00:03:55.587 SYMLINK libspdk_bdev_passthru.so 00:03:55.587 SYMLINK libspdk_bdev_iscsi.so 00:03:55.587 LIB libspdk_bdev_virtio.a 00:03:55.587 SYMLINK libspdk_bdev_delay.so 00:03:55.587 SYMLINK libspdk_bdev_aio.so 00:03:55.587 LIB libspdk_bdev_lvol.a 00:03:55.587 SO libspdk_bdev_virtio.so.6.0 00:03:55.587 SO libspdk_bdev_lvol.so.6.0 00:03:55.587 SYMLINK libspdk_bdev_virtio.so 00:03:55.587 SYMLINK libspdk_bdev_lvol.so 00:03:55.847 LIB libspdk_bdev_raid.a 00:03:55.847 SO libspdk_bdev_raid.so.6.0 00:03:56.106 SYMLINK libspdk_bdev_raid.so 00:03:57.046 LIB libspdk_bdev_nvme.a 00:03:57.046 SO libspdk_bdev_nvme.so.7.1 00:03:57.046 SYMLINK libspdk_bdev_nvme.so 00:03:57.616 CC module/event/subsystems/iobuf/iobuf.o 00:03:57.616 CC module/event/subsystems/vmd/vmd.o 00:03:57.616 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:57.616 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:57.616 CC module/event/subsystems/scheduler/scheduler.o 00:03:57.616 CC module/event/subsystems/keyring/keyring.o 00:03:57.616 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:57.616 CC module/event/subsystems/sock/sock.o 00:03:57.616 CC module/event/subsystems/fsdev/fsdev.o 00:03:57.876 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:57.876 LIB libspdk_event_keyring.a 00:03:57.876 LIB libspdk_event_vhost_blk.a 00:03:57.876 LIB libspdk_event_fsdev.a 00:03:57.876 LIB libspdk_event_scheduler.a 00:03:57.876 LIB libspdk_event_vmd.a 00:03:57.876 LIB libspdk_event_iobuf.a 00:03:57.876 LIB libspdk_event_vfu_tgt.a 00:03:57.876 LIB libspdk_event_sock.a 00:03:57.876 SO libspdk_event_vmd.so.6.0 00:03:57.876 SO libspdk_event_keyring.so.1.0 00:03:57.876 SO libspdk_event_scheduler.so.4.0 00:03:57.876 SO libspdk_event_fsdev.so.1.0 00:03:57.876 SO libspdk_event_vhost_blk.so.3.0 00:03:57.876 SO libspdk_event_vfu_tgt.so.3.0 00:03:57.876 SO libspdk_event_iobuf.so.3.0 00:03:57.876 SO libspdk_event_sock.so.5.0 00:03:57.876 SYMLINK 
libspdk_event_vhost_blk.so 00:03:57.876 SYMLINK libspdk_event_fsdev.so 00:03:57.876 SYMLINK libspdk_event_keyring.so 00:03:57.876 SYMLINK libspdk_event_scheduler.so 00:03:57.876 SYMLINK libspdk_event_vmd.so 00:03:57.876 SYMLINK libspdk_event_vfu_tgt.so 00:03:57.876 SYMLINK libspdk_event_sock.so 00:03:57.876 SYMLINK libspdk_event_iobuf.so 00:03:58.445 CC module/event/subsystems/accel/accel.o 00:03:58.445 LIB libspdk_event_accel.a 00:03:58.445 SO libspdk_event_accel.so.6.0 00:03:58.445 SYMLINK libspdk_event_accel.so 00:03:59.015 CC module/event/subsystems/bdev/bdev.o 00:03:59.015 LIB libspdk_event_bdev.a 00:03:59.015 SO libspdk_event_bdev.so.6.0 00:03:59.275 SYMLINK libspdk_event_bdev.so 00:03:59.535 CC module/event/subsystems/scsi/scsi.o 00:03:59.535 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:59.535 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:59.535 CC module/event/subsystems/nbd/nbd.o 00:03:59.535 CC module/event/subsystems/ublk/ublk.o 00:03:59.535 LIB libspdk_event_scsi.a 00:03:59.795 LIB libspdk_event_nbd.a 00:03:59.795 LIB libspdk_event_ublk.a 00:03:59.795 SO libspdk_event_ublk.so.3.0 00:03:59.795 SO libspdk_event_scsi.so.6.0 00:03:59.795 SO libspdk_event_nbd.so.6.0 00:03:59.795 LIB libspdk_event_nvmf.a 00:03:59.795 SYMLINK libspdk_event_ublk.so 00:03:59.795 SYMLINK libspdk_event_scsi.so 00:03:59.795 SYMLINK libspdk_event_nbd.so 00:03:59.795 SO libspdk_event_nvmf.so.6.0 00:03:59.795 SYMLINK libspdk_event_nvmf.so 00:04:00.054 CC module/event/subsystems/iscsi/iscsi.o 00:04:00.054 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:04:00.314 LIB libspdk_event_vhost_scsi.a 00:04:00.314 LIB libspdk_event_iscsi.a 00:04:00.314 SO libspdk_event_vhost_scsi.so.3.0 00:04:00.314 SO libspdk_event_iscsi.so.6.0 00:04:00.314 SYMLINK libspdk_event_vhost_scsi.so 00:04:00.314 SYMLINK libspdk_event_iscsi.so 00:04:00.574 SO libspdk.so.6.0 00:04:00.574 SYMLINK libspdk.so 00:04:00.834 CC test/rpc_client/rpc_client_test.o 00:04:00.834 CC app/spdk_top/spdk_top.o 
00:04:00.834 CC app/trace_record/trace_record.o 00:04:00.834 CXX app/trace/trace.o 00:04:00.834 CC app/spdk_nvme_identify/identify.o 00:04:01.102 TEST_HEADER include/spdk/accel.h 00:04:01.102 TEST_HEADER include/spdk/accel_module.h 00:04:01.102 TEST_HEADER include/spdk/barrier.h 00:04:01.102 TEST_HEADER include/spdk/assert.h 00:04:01.102 TEST_HEADER include/spdk/base64.h 00:04:01.102 TEST_HEADER include/spdk/bdev_module.h 00:04:01.102 TEST_HEADER include/spdk/bdev.h 00:04:01.102 CC app/spdk_lspci/spdk_lspci.o 00:04:01.102 TEST_HEADER include/spdk/bdev_zone.h 00:04:01.102 CC app/spdk_nvme_perf/perf.o 00:04:01.102 TEST_HEADER include/spdk/bit_pool.h 00:04:01.102 CC app/spdk_nvme_discover/discovery_aer.o 00:04:01.102 TEST_HEADER include/spdk/blob_bdev.h 00:04:01.102 TEST_HEADER include/spdk/bit_array.h 00:04:01.102 TEST_HEADER include/spdk/blobfs_bdev.h 00:04:01.102 TEST_HEADER include/spdk/blob.h 00:04:01.102 TEST_HEADER include/spdk/blobfs.h 00:04:01.102 TEST_HEADER include/spdk/conf.h 00:04:01.102 TEST_HEADER include/spdk/config.h 00:04:01.102 TEST_HEADER include/spdk/cpuset.h 00:04:01.102 TEST_HEADER include/spdk/crc32.h 00:04:01.102 TEST_HEADER include/spdk/crc16.h 00:04:01.102 TEST_HEADER include/spdk/crc64.h 00:04:01.102 TEST_HEADER include/spdk/dif.h 00:04:01.102 TEST_HEADER include/spdk/dma.h 00:04:01.102 TEST_HEADER include/spdk/endian.h 00:04:01.102 TEST_HEADER include/spdk/env_dpdk.h 00:04:01.102 TEST_HEADER include/spdk/event.h 00:04:01.102 TEST_HEADER include/spdk/env.h 00:04:01.102 TEST_HEADER include/spdk/fd.h 00:04:01.102 TEST_HEADER include/spdk/fsdev.h 00:04:01.102 TEST_HEADER include/spdk/fd_group.h 00:04:01.102 TEST_HEADER include/spdk/file.h 00:04:01.102 TEST_HEADER include/spdk/fsdev_module.h 00:04:01.102 CC examples/interrupt_tgt/interrupt_tgt.o 00:04:01.102 TEST_HEADER include/spdk/ftl.h 00:04:01.102 TEST_HEADER include/spdk/gpt_spec.h 00:04:01.102 TEST_HEADER include/spdk/hexlify.h 00:04:01.102 TEST_HEADER include/spdk/histogram_data.h 
00:04:01.102 TEST_HEADER include/spdk/idxd.h 00:04:01.102 CC app/nvmf_tgt/nvmf_main.o 00:04:01.102 TEST_HEADER include/spdk/idxd_spec.h 00:04:01.102 TEST_HEADER include/spdk/init.h 00:04:01.102 TEST_HEADER include/spdk/ioat.h 00:04:01.102 TEST_HEADER include/spdk/iscsi_spec.h 00:04:01.102 TEST_HEADER include/spdk/ioat_spec.h 00:04:01.102 TEST_HEADER include/spdk/json.h 00:04:01.102 TEST_HEADER include/spdk/jsonrpc.h 00:04:01.102 TEST_HEADER include/spdk/keyring.h 00:04:01.102 TEST_HEADER include/spdk/keyring_module.h 00:04:01.102 TEST_HEADER include/spdk/likely.h 00:04:01.102 TEST_HEADER include/spdk/log.h 00:04:01.102 TEST_HEADER include/spdk/md5.h 00:04:01.102 TEST_HEADER include/spdk/lvol.h 00:04:01.102 TEST_HEADER include/spdk/mmio.h 00:04:01.102 TEST_HEADER include/spdk/memory.h 00:04:01.102 TEST_HEADER include/spdk/nbd.h 00:04:01.102 TEST_HEADER include/spdk/net.h 00:04:01.102 TEST_HEADER include/spdk/nvme.h 00:04:01.102 TEST_HEADER include/spdk/notify.h 00:04:01.102 TEST_HEADER include/spdk/nvme_ocssd.h 00:04:01.102 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:04:01.102 TEST_HEADER include/spdk/nvme_intel.h 00:04:01.102 CC app/iscsi_tgt/iscsi_tgt.o 00:04:01.102 TEST_HEADER include/spdk/nvme_spec.h 00:04:01.102 CC app/spdk_dd/spdk_dd.o 00:04:01.102 TEST_HEADER include/spdk/nvme_zns.h 00:04:01.102 TEST_HEADER include/spdk/nvmf_cmd.h 00:04:01.102 TEST_HEADER include/spdk/nvmf.h 00:04:01.102 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:04:01.102 TEST_HEADER include/spdk/nvmf_spec.h 00:04:01.102 TEST_HEADER include/spdk/nvmf_transport.h 00:04:01.102 TEST_HEADER include/spdk/opal.h 00:04:01.102 TEST_HEADER include/spdk/opal_spec.h 00:04:01.102 TEST_HEADER include/spdk/pci_ids.h 00:04:01.102 TEST_HEADER include/spdk/pipe.h 00:04:01.102 TEST_HEADER include/spdk/queue.h 00:04:01.102 TEST_HEADER include/spdk/rpc.h 00:04:01.102 TEST_HEADER include/spdk/scheduler.h 00:04:01.102 TEST_HEADER include/spdk/reduce.h 00:04:01.102 TEST_HEADER include/spdk/scsi_spec.h 
00:04:01.102 TEST_HEADER include/spdk/scsi.h 00:04:01.102 TEST_HEADER include/spdk/sock.h 00:04:01.102 TEST_HEADER include/spdk/stdinc.h 00:04:01.102 TEST_HEADER include/spdk/thread.h 00:04:01.102 TEST_HEADER include/spdk/string.h 00:04:01.102 TEST_HEADER include/spdk/trace.h 00:04:01.102 TEST_HEADER include/spdk/trace_parser.h 00:04:01.102 TEST_HEADER include/spdk/ublk.h 00:04:01.102 CC app/spdk_tgt/spdk_tgt.o 00:04:01.102 TEST_HEADER include/spdk/tree.h 00:04:01.102 TEST_HEADER include/spdk/util.h 00:04:01.102 TEST_HEADER include/spdk/uuid.h 00:04:01.102 TEST_HEADER include/spdk/version.h 00:04:01.102 TEST_HEADER include/spdk/vfio_user_pci.h 00:04:01.102 TEST_HEADER include/spdk/vfio_user_spec.h 00:04:01.102 TEST_HEADER include/spdk/vhost.h 00:04:01.102 TEST_HEADER include/spdk/xor.h 00:04:01.102 TEST_HEADER include/spdk/vmd.h 00:04:01.102 TEST_HEADER include/spdk/zipf.h 00:04:01.102 CXX test/cpp_headers/accel.o 00:04:01.102 CXX test/cpp_headers/accel_module.o 00:04:01.102 CXX test/cpp_headers/base64.o 00:04:01.102 CXX test/cpp_headers/assert.o 00:04:01.102 CXX test/cpp_headers/barrier.o 00:04:01.102 CXX test/cpp_headers/bdev.o 00:04:01.102 CXX test/cpp_headers/bdev_module.o 00:04:01.102 CXX test/cpp_headers/bit_array.o 00:04:01.102 CXX test/cpp_headers/bdev_zone.o 00:04:01.102 CXX test/cpp_headers/bit_pool.o 00:04:01.102 CXX test/cpp_headers/blobfs_bdev.o 00:04:01.102 CXX test/cpp_headers/blob_bdev.o 00:04:01.102 CXX test/cpp_headers/blobfs.o 00:04:01.102 CXX test/cpp_headers/blob.o 00:04:01.102 CXX test/cpp_headers/config.o 00:04:01.102 CXX test/cpp_headers/conf.o 00:04:01.102 CXX test/cpp_headers/cpuset.o 00:04:01.102 CXX test/cpp_headers/crc16.o 00:04:01.102 CXX test/cpp_headers/crc64.o 00:04:01.102 CXX test/cpp_headers/crc32.o 00:04:01.102 CXX test/cpp_headers/dif.o 00:04:01.102 CXX test/cpp_headers/endian.o 00:04:01.102 CXX test/cpp_headers/env_dpdk.o 00:04:01.102 CXX test/cpp_headers/dma.o 00:04:01.102 CXX test/cpp_headers/env.o 00:04:01.102 CXX 
test/cpp_headers/fd_group.o 00:04:01.102 CXX test/cpp_headers/event.o 00:04:01.102 CXX test/cpp_headers/fd.o 00:04:01.102 CXX test/cpp_headers/file.o 00:04:01.102 CXX test/cpp_headers/fsdev.o 00:04:01.102 CXX test/cpp_headers/fsdev_module.o 00:04:01.102 CXX test/cpp_headers/ftl.o 00:04:01.102 CXX test/cpp_headers/gpt_spec.o 00:04:01.102 CXX test/cpp_headers/hexlify.o 00:04:01.102 CXX test/cpp_headers/histogram_data.o 00:04:01.102 CXX test/cpp_headers/idxd.o 00:04:01.102 CXX test/cpp_headers/idxd_spec.o 00:04:01.102 CXX test/cpp_headers/init.o 00:04:01.102 CXX test/cpp_headers/ioat.o 00:04:01.102 CXX test/cpp_headers/iscsi_spec.o 00:04:01.102 CXX test/cpp_headers/ioat_spec.o 00:04:01.102 CXX test/cpp_headers/jsonrpc.o 00:04:01.102 CXX test/cpp_headers/keyring_module.o 00:04:01.103 CXX test/cpp_headers/json.o 00:04:01.103 CXX test/cpp_headers/keyring.o 00:04:01.103 CXX test/cpp_headers/likely.o 00:04:01.103 CXX test/cpp_headers/log.o 00:04:01.103 CXX test/cpp_headers/md5.o 00:04:01.103 CXX test/cpp_headers/lvol.o 00:04:01.103 CXX test/cpp_headers/memory.o 00:04:01.103 CXX test/cpp_headers/mmio.o 00:04:01.103 CXX test/cpp_headers/net.o 00:04:01.103 CXX test/cpp_headers/nbd.o 00:04:01.103 CXX test/cpp_headers/notify.o 00:04:01.103 CXX test/cpp_headers/nvme.o 00:04:01.103 CXX test/cpp_headers/nvme_ocssd.o 00:04:01.103 CXX test/cpp_headers/nvme_intel.o 00:04:01.103 CXX test/cpp_headers/nvme_ocssd_spec.o 00:04:01.103 CXX test/cpp_headers/nvme_spec.o 00:04:01.103 CXX test/cpp_headers/nvme_zns.o 00:04:01.103 CXX test/cpp_headers/nvmf_fc_spec.o 00:04:01.103 CXX test/cpp_headers/nvmf.o 00:04:01.103 CXX test/cpp_headers/nvmf_cmd.o 00:04:01.103 CXX test/cpp_headers/nvmf_spec.o 00:04:01.103 CXX test/cpp_headers/nvmf_transport.o 00:04:01.103 CC examples/util/zipf/zipf.o 00:04:01.103 CXX test/cpp_headers/opal.o 00:04:01.103 CXX test/cpp_headers/opal_spec.o 00:04:01.103 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:04:01.103 CXX test/cpp_headers/pci_ids.o 00:04:01.103 CC 
test/env/memory/memory_ut.o 00:04:01.103 CC examples/ioat/perf/perf.o 00:04:01.103 CC test/env/vtophys/vtophys.o 00:04:01.103 CC examples/ioat/verify/verify.o 00:04:01.383 CC test/app/histogram_perf/histogram_perf.o 00:04:01.383 CC test/app/jsoncat/jsoncat.o 00:04:01.383 CC test/thread/poller_perf/poller_perf.o 00:04:01.383 CC test/env/pci/pci_ut.o 00:04:01.383 CC test/app/stub/stub.o 00:04:01.383 CC app/fio/nvme/fio_plugin.o 00:04:01.383 CC test/dma/test_dma/test_dma.o 00:04:01.383 LINK spdk_lspci 00:04:01.383 CC test/app/bdev_svc/bdev_svc.o 00:04:01.383 CC app/fio/bdev/fio_plugin.o 00:04:01.383 LINK rpc_client_test 00:04:01.383 LINK nvmf_tgt 00:04:01.383 LINK interrupt_tgt 00:04:01.650 LINK spdk_nvme_discover 00:04:01.650 LINK spdk_trace_record 00:04:01.650 LINK iscsi_tgt 00:04:01.650 CC test/env/mem_callbacks/mem_callbacks.o 00:04:01.650 LINK spdk_tgt 00:04:01.650 LINK env_dpdk_post_init 00:04:01.910 CXX test/cpp_headers/pipe.o 00:04:01.910 CXX test/cpp_headers/queue.o 00:04:01.910 LINK histogram_perf 00:04:01.910 CXX test/cpp_headers/reduce.o 00:04:01.910 CXX test/cpp_headers/rpc.o 00:04:01.910 LINK poller_perf 00:04:01.910 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:04:01.910 CXX test/cpp_headers/scheduler.o 00:04:01.910 CXX test/cpp_headers/scsi.o 00:04:01.910 LINK zipf 00:04:01.910 CXX test/cpp_headers/scsi_spec.o 00:04:01.910 CXX test/cpp_headers/sock.o 00:04:01.910 CXX test/cpp_headers/stdinc.o 00:04:01.910 CXX test/cpp_headers/string.o 00:04:01.910 CXX test/cpp_headers/thread.o 00:04:01.910 CXX test/cpp_headers/trace.o 00:04:01.910 CXX test/cpp_headers/trace_parser.o 00:04:01.911 CXX test/cpp_headers/tree.o 00:04:01.911 CXX test/cpp_headers/ublk.o 00:04:01.911 CXX test/cpp_headers/util.o 00:04:01.911 CXX test/cpp_headers/uuid.o 00:04:01.911 CXX test/cpp_headers/version.o 00:04:01.911 CXX test/cpp_headers/vfio_user_pci.o 00:04:01.911 CXX test/cpp_headers/vfio_user_spec.o 00:04:01.911 CXX test/cpp_headers/vhost.o 00:04:01.911 LINK vtophys 00:04:01.911 CXX 
test/cpp_headers/vmd.o 00:04:01.911 CXX test/cpp_headers/xor.o 00:04:01.911 CXX test/cpp_headers/zipf.o 00:04:01.911 LINK jsoncat 00:04:01.911 LINK bdev_svc 00:04:01.911 LINK spdk_dd 00:04:01.911 LINK stub 00:04:01.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:04:01.911 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:04:01.911 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:04:01.911 LINK ioat_perf 00:04:01.911 LINK verify 00:04:01.911 LINK spdk_trace 00:04:02.170 LINK mem_callbacks 00:04:02.170 LINK pci_ut 00:04:02.170 LINK spdk_nvme_identify 00:04:02.170 LINK test_dma 00:04:02.429 LINK nvme_fuzz 00:04:02.429 CC test/event/event_perf/event_perf.o 00:04:02.429 CC test/event/reactor_perf/reactor_perf.o 00:04:02.429 CC test/event/reactor/reactor.o 00:04:02.429 CC test/event/app_repeat/app_repeat.o 00:04:02.429 CC examples/idxd/perf/perf.o 00:04:02.429 CC examples/sock/hello_world/hello_sock.o 00:04:02.429 CC examples/vmd/lsvmd/lsvmd.o 00:04:02.429 CC examples/vmd/led/led.o 00:04:02.429 CC examples/thread/thread/thread_ex.o 00:04:02.429 CC test/event/scheduler/scheduler.o 00:04:02.429 LINK spdk_nvme_perf 00:04:02.429 LINK vhost_fuzz 00:04:02.429 LINK spdk_nvme 00:04:02.429 LINK reactor 00:04:02.429 LINK spdk_bdev 00:04:02.429 CC app/vhost/vhost.o 00:04:02.429 LINK reactor_perf 00:04:02.429 LINK event_perf 00:04:02.429 LINK app_repeat 00:04:02.429 LINK spdk_top 00:04:02.429 LINK memory_ut 00:04:02.429 LINK lsvmd 00:04:02.429 LINK led 00:04:02.688 LINK hello_sock 00:04:02.688 LINK scheduler 00:04:02.688 LINK thread 00:04:02.688 LINK idxd_perf 00:04:02.688 LINK vhost 00:04:02.688 CC test/nvme/reserve/reserve.o 00:04:02.688 CC test/nvme/overhead/overhead.o 00:04:02.688 CC test/nvme/sgl/sgl.o 00:04:02.688 CC test/nvme/reset/reset.o 00:04:02.688 CC test/nvme/e2edp/nvme_dp.o 00:04:02.688 CC test/nvme/startup/startup.o 00:04:02.688 CC test/nvme/fdp/fdp.o 00:04:02.688 CC test/nvme/simple_copy/simple_copy.o 00:04:02.688 CC test/nvme/err_injection/err_injection.o 00:04:02.688 CC 
test/nvme/compliance/nvme_compliance.o 00:04:02.688 CC test/nvme/connect_stress/connect_stress.o 00:04:02.688 CC test/nvme/cuse/cuse.o 00:04:02.688 CC test/nvme/doorbell_aers/doorbell_aers.o 00:04:02.688 CC test/nvme/fused_ordering/fused_ordering.o 00:04:02.688 CC test/nvme/boot_partition/boot_partition.o 00:04:02.688 CC test/nvme/aer/aer.o 00:04:02.947 CC test/blobfs/mkfs/mkfs.o 00:04:02.947 CC test/accel/dif/dif.o 00:04:02.947 CC test/lvol/esnap/esnap.o 00:04:02.947 LINK startup 00:04:02.947 LINK reserve 00:04:02.947 LINK connect_stress 00:04:02.947 LINK boot_partition 00:04:02.947 LINK err_injection 00:04:02.947 LINK doorbell_aers 00:04:02.947 LINK fused_ordering 00:04:02.947 LINK simple_copy 00:04:02.947 LINK reset 00:04:02.947 LINK sgl 00:04:03.207 LINK overhead 00:04:03.207 LINK mkfs 00:04:03.207 LINK nvme_dp 00:04:03.207 CC examples/nvme/hello_world/hello_world.o 00:04:03.207 CC examples/nvme/abort/abort.o 00:04:03.207 CC examples/nvme/hotplug/hotplug.o 00:04:03.207 LINK aer 00:04:03.207 LINK nvme_compliance 00:04:03.207 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:04:03.207 CC examples/nvme/reconnect/reconnect.o 00:04:03.207 CC examples/nvme/nvme_manage/nvme_manage.o 00:04:03.207 CC examples/nvme/arbitration/arbitration.o 00:04:03.207 CC examples/nvme/cmb_copy/cmb_copy.o 00:04:03.207 LINK fdp 00:04:03.207 CC examples/accel/perf/accel_perf.o 00:04:03.207 CC examples/blob/cli/blobcli.o 00:04:03.207 CC examples/fsdev/hello_world/hello_fsdev.o 00:04:03.207 CC examples/blob/hello_world/hello_blob.o 00:04:03.207 LINK hello_world 00:04:03.207 LINK pmr_persistence 00:04:03.466 LINK cmb_copy 00:04:03.467 LINK hotplug 00:04:03.467 LINK abort 00:04:03.467 LINK hello_blob 00:04:03.467 LINK arbitration 00:04:03.467 LINK iscsi_fuzz 00:04:03.467 LINK dif 00:04:03.467 LINK reconnect 00:04:03.467 LINK hello_fsdev 00:04:03.467 LINK nvme_manage 00:04:03.726 LINK accel_perf 00:04:03.726 LINK blobcli 00:04:03.985 LINK cuse 00:04:03.986 CC test/bdev/bdevio/bdevio.o 
00:04:04.244 CC examples/bdev/hello_world/hello_bdev.o 00:04:04.244 CC examples/bdev/bdevperf/bdevperf.o 00:04:04.244 LINK bdevio 00:04:04.503 LINK hello_bdev 00:04:04.763 LINK bdevperf 00:04:05.332 CC examples/nvmf/nvmf/nvmf.o 00:04:05.592 LINK nvmf 00:04:06.532 LINK esnap 00:04:06.791 00:04:06.791 real 0m55.509s 00:04:06.791 user 6m48.799s 00:04:06.791 sys 3m1.552s 00:04:06.791 05:04:20 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:04:06.791 05:04:20 make -- common/autotest_common.sh@10 -- $ set +x 00:04:06.791 ************************************ 00:04:06.791 END TEST make 00:04:06.791 ************************************ 00:04:06.791 05:04:20 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:04:06.791 05:04:20 -- pm/common@29 -- $ signal_monitor_resources TERM 00:04:06.791 05:04:20 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:04:06.791 05:04:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.791 05:04:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:04:06.791 05:04:20 -- pm/common@44 -- $ pid=7581 00:04:06.792 05:04:20 -- pm/common@50 -- $ kill -TERM 7581 00:04:06.792 05:04:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.792 05:04:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:04:06.792 05:04:20 -- pm/common@44 -- $ pid=7582 00:04:06.792 05:04:20 -- pm/common@50 -- $ kill -TERM 7582 00:04:06.792 05:04:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.792 05:04:20 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:04:06.792 05:04:20 -- pm/common@44 -- $ pid=7584 00:04:06.792 05:04:20 -- pm/common@50 -- $ kill -TERM 7584 00:04:06.792 05:04:20 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:04:06.792 05:04:20 -- pm/common@43 -- $ 
[[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:04:06.792 05:04:20 -- pm/common@44 -- $ pid=7610 00:04:06.792 05:04:20 -- pm/common@50 -- $ sudo -E kill -TERM 7610 00:04:06.792 05:04:20 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:04:06.792 05:04:20 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:04:06.792 05:04:20 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:07.051 05:04:20 -- common/autotest_common.sh@1711 -- # lcov --version 00:04:07.051 05:04:20 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:07.051 05:04:20 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:07.051 05:04:20 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:07.051 05:04:20 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:07.051 05:04:20 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:07.051 05:04:20 -- scripts/common.sh@336 -- # IFS=.-: 00:04:07.051 05:04:20 -- scripts/common.sh@336 -- # read -ra ver1 00:04:07.051 05:04:20 -- scripts/common.sh@337 -- # IFS=.-: 00:04:07.051 05:04:20 -- scripts/common.sh@337 -- # read -ra ver2 00:04:07.051 05:04:20 -- scripts/common.sh@338 -- # local 'op=<' 00:04:07.051 05:04:20 -- scripts/common.sh@340 -- # ver1_l=2 00:04:07.051 05:04:20 -- scripts/common.sh@341 -- # ver2_l=1 00:04:07.051 05:04:20 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:07.051 05:04:20 -- scripts/common.sh@344 -- # case "$op" in 00:04:07.051 05:04:20 -- scripts/common.sh@345 -- # : 1 00:04:07.051 05:04:20 -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:07.051 05:04:20 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:07.051 05:04:20 -- scripts/common.sh@365 -- # decimal 1 00:04:07.051 05:04:20 -- scripts/common.sh@353 -- # local d=1 00:04:07.052 05:04:20 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:07.052 05:04:20 -- scripts/common.sh@355 -- # echo 1 00:04:07.052 05:04:20 -- scripts/common.sh@365 -- # ver1[v]=1 00:04:07.052 05:04:20 -- scripts/common.sh@366 -- # decimal 2 00:04:07.052 05:04:20 -- scripts/common.sh@353 -- # local d=2 00:04:07.052 05:04:20 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:07.052 05:04:20 -- scripts/common.sh@355 -- # echo 2 00:04:07.052 05:04:20 -- scripts/common.sh@366 -- # ver2[v]=2 00:04:07.052 05:04:20 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:07.052 05:04:20 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:07.052 05:04:20 -- scripts/common.sh@368 -- # return 0 00:04:07.052 05:04:20 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:07.052 05:04:20 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.052 --rc genhtml_branch_coverage=1 00:04:07.052 --rc genhtml_function_coverage=1 00:04:07.052 --rc genhtml_legend=1 00:04:07.052 --rc geninfo_all_blocks=1 00:04:07.052 --rc geninfo_unexecuted_blocks=1 00:04:07.052 00:04:07.052 ' 00:04:07.052 05:04:20 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.052 --rc genhtml_branch_coverage=1 00:04:07.052 --rc genhtml_function_coverage=1 00:04:07.052 --rc genhtml_legend=1 00:04:07.052 --rc geninfo_all_blocks=1 00:04:07.052 --rc geninfo_unexecuted_blocks=1 00:04:07.052 00:04:07.052 ' 00:04:07.052 05:04:20 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.052 --rc genhtml_branch_coverage=1 00:04:07.052 --rc 
genhtml_function_coverage=1 00:04:07.052 --rc genhtml_legend=1 00:04:07.052 --rc geninfo_all_blocks=1 00:04:07.052 --rc geninfo_unexecuted_blocks=1 00:04:07.052 00:04:07.052 ' 00:04:07.052 05:04:20 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:07.052 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:07.052 --rc genhtml_branch_coverage=1 00:04:07.052 --rc genhtml_function_coverage=1 00:04:07.052 --rc genhtml_legend=1 00:04:07.052 --rc geninfo_all_blocks=1 00:04:07.052 --rc geninfo_unexecuted_blocks=1 00:04:07.052 00:04:07.052 ' 00:04:07.052 05:04:20 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:07.052 05:04:20 -- nvmf/common.sh@7 -- # uname -s 00:04:07.052 05:04:20 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:07.052 05:04:20 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:07.052 05:04:20 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:07.052 05:04:20 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:07.052 05:04:20 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:07.052 05:04:20 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:07.052 05:04:20 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:07.052 05:04:20 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:07.052 05:04:20 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:07.052 05:04:20 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:07.052 05:04:20 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:04:07.052 05:04:20 -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:04:07.052 05:04:20 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:07.052 05:04:20 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:07.052 05:04:20 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:04:07.052 05:04:20 -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:07.052 05:04:20 -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:07.052 05:04:20 -- scripts/common.sh@15 -- # shopt -s extglob 00:04:07.052 05:04:20 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:07.052 05:04:20 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:07.052 05:04:20 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:07.052 05:04:20 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.052 05:04:20 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.052 05:04:20 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.052 05:04:20 -- paths/export.sh@5 -- # export PATH 00:04:07.052 05:04:20 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:07.052 05:04:20 -- nvmf/common.sh@51 -- # : 0 00:04:07.052 05:04:20 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:07.052 05:04:20 -- nvmf/common.sh@53 -- # 
build_nvmf_app_args 00:04:07.052 05:04:20 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:07.052 05:04:20 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:07.052 05:04:20 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:07.052 05:04:20 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:07.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:07.052 05:04:20 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:07.052 05:04:20 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:07.052 05:04:20 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:07.052 05:04:20 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:04:07.052 05:04:20 -- spdk/autotest.sh@32 -- # uname -s 00:04:07.052 05:04:20 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:04:07.052 05:04:20 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:04:07.052 05:04:20 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.052 05:04:20 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:04:07.052 05:04:20 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:04:07.052 05:04:20 -- spdk/autotest.sh@44 -- # modprobe nbd 00:04:07.052 05:04:20 -- spdk/autotest.sh@46 -- # type -P udevadm 00:04:07.052 05:04:20 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:04:07.052 05:04:20 -- spdk/autotest.sh@48 -- # udevadm_pid=88043 00:04:07.052 05:04:20 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:04:07.052 05:04:20 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:04:07.052 05:04:20 -- pm/common@17 -- # local monitor 00:04:07.052 05:04:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.052 05:04:20 -- pm/common@19 -- # for monitor in 
"${MONITOR_RESOURCES[@]}" 00:04:07.052 05:04:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.052 05:04:20 -- pm/common@21 -- # date +%s 00:04:07.052 05:04:20 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:04:07.052 05:04:20 -- pm/common@21 -- # date +%s 00:04:07.052 05:04:20 -- pm/common@21 -- # date +%s 00:04:07.052 05:04:20 -- pm/common@25 -- # sleep 1 00:04:07.052 05:04:20 -- pm/common@21 -- # date +%s 00:04:07.052 05:04:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734235460 00:04:07.052 05:04:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734235460 00:04:07.052 05:04:20 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734235460 00:04:07.052 05:04:20 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1734235460 00:04:07.052 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734235460_collect-cpu-load.pm.log 00:04:07.052 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734235460_collect-vmstat.pm.log 00:04:07.052 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734235460_collect-cpu-temp.pm.log 00:04:07.312 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1734235460_collect-bmc-pm.bmc.pm.log 00:04:08.252 
05:04:21 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:04:08.252 05:04:21 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:04:08.252 05:04:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.252 05:04:21 -- common/autotest_common.sh@10 -- # set +x 00:04:08.252 05:04:21 -- spdk/autotest.sh@59 -- # create_test_list 00:04:08.252 05:04:21 -- common/autotest_common.sh@752 -- # xtrace_disable 00:04:08.252 05:04:21 -- common/autotest_common.sh@10 -- # set +x 00:04:08.252 05:04:21 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:04:08.252 05:04:21 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.252 05:04:21 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.252 05:04:21 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:04:08.252 05:04:21 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:08.252 05:04:21 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:04:08.252 05:04:21 -- common/autotest_common.sh@1457 -- # uname 00:04:08.252 05:04:21 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:04:08.252 05:04:21 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:04:08.252 05:04:21 -- common/autotest_common.sh@1477 -- # uname 00:04:08.252 05:04:21 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:04:08.252 05:04:21 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:04:08.252 05:04:21 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:04:08.252 lcov: LCOV version 1.15 00:04:08.252 05:04:21 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:04:26.353 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:04:26.353 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:04:32.982 05:04:46 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:04:32.982 05:04:46 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:32.982 05:04:46 -- common/autotest_common.sh@10 -- # set +x 00:04:32.982 05:04:46 -- spdk/autotest.sh@78 -- # rm -f 00:04:32.982 05:04:46 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:35.525 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:04:35.525 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:04:35.525 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:04:35.525 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:04:35.525 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:04:35.525 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:04:35.525 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:04:35.525 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:04:35.525 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:04:35.785 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:04:35.785 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:04:35.785 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:04:35.785 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:04:35.785 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:04:35.785 
0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:04:35.785 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:04:35.785 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:04:35.785 05:04:49 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:04:35.785 05:04:49 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:04:35.785 05:04:49 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:04:35.785 05:04:49 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:04:35.785 05:04:49 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:04:35.785 05:04:49 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:04:35.786 05:04:49 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:04:35.786 05:04:49 -- common/autotest_common.sh@1669 -- # bdf=0000:5e:00.0 00:04:35.786 05:04:49 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:04:35.786 05:04:49 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:04:35.786 05:04:49 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:04:35.786 05:04:49 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:04:35.786 05:04:49 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:04:35.786 05:04:49 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:04:35.786 05:04:49 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:04:35.786 05:04:49 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:04:35.786 05:04:49 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:04:35.786 05:04:49 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:04:35.786 05:04:49 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:04:36.046 No valid GPT data, bailing 00:04:36.046 05:04:49 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:04:36.046 05:04:49 -- scripts/common.sh@394 -- # pt= 00:04:36.046 05:04:49 -- scripts/common.sh@395 -- 
# return 1 00:04:36.046 05:04:49 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:04:36.046 1+0 records in 00:04:36.046 1+0 records out 00:04:36.046 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00145411 s, 721 MB/s 00:04:36.046 05:04:49 -- spdk/autotest.sh@105 -- # sync 00:04:36.046 05:04:49 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:04:36.046 05:04:49 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:04:36.046 05:04:49 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:04:41.364 05:04:54 -- spdk/autotest.sh@111 -- # uname -s 00:04:41.364 05:04:54 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:04:41.364 05:04:54 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:04:41.364 05:04:54 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:43.997 Hugepages 00:04:43.997 node hugesize free / total 00:04:43.997 node0 1048576kB 0 / 0 00:04:43.997 node0 2048kB 0 / 0 00:04:43.997 node1 1048576kB 0 / 0 00:04:43.997 node1 2048kB 0 / 0 00:04:43.997 00:04:43.997 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:43.997 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:43.997 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:43.997 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:43.997 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:43.997 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:43.997 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:43.997 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:43.997 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:43.997 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:43.997 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:43.997 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:44.263 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:44.263 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:44.263 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:44.263 I/OAT 0000:80:04.5 8086 
2021 1 ioatdma - - 00:04:44.263 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:44.263 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:44.263 05:04:57 -- spdk/autotest.sh@117 -- # uname -s 00:04:44.263 05:04:57 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:04:44.263 05:04:57 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:04:44.263 05:04:57 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:47.568 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.568 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.568 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.568 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.568 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.568 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.568 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:47.569 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:47.828 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:48.088 05:05:01 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:49.028 05:05:02 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:49.028 05:05:02 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:49.028 05:05:02 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:49.028 05:05:02 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:49.028 05:05:02 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:49.028 05:05:02 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:49.028 05:05:02 -- 
common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:49.028 05:05:02 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:49.028 05:05:02 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:49.028 05:05:02 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:49.028 05:05:02 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:49.028 05:05:02 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:52.325 Waiting for block devices as requested 00:04:52.325 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:52.325 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.325 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.325 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:52.325 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:52.325 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:52.325 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:52.586 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:52.586 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:52.586 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:52.586 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:52.846 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:52.846 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:52.846 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:53.105 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:53.105 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:53.105 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:53.364 05:05:06 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:53.364 05:05:06 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:53.364 05:05:06 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:04:53.364 05:05:06 -- 
common/autotest_common.sh@1487 -- # grep 0000:5e:00.0/nvme/nvme 00:04:53.364 05:05:06 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:53.364 05:05:06 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:53.364 05:05:06 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:53.364 05:05:06 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:53.364 05:05:06 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:53.364 05:05:06 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:53.364 05:05:06 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:53.364 05:05:06 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:53.364 05:05:06 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:53.364 05:05:06 -- common/autotest_common.sh@1531 -- # oacs=' 0xf' 00:04:53.364 05:05:06 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:53.364 05:05:06 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:53.364 05:05:06 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:53.364 05:05:06 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:53.364 05:05:06 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:53.364 05:05:06 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:53.364 05:05:06 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:53.364 05:05:06 -- common/autotest_common.sh@1543 -- # continue 00:04:53.364 05:05:06 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:53.364 05:05:06 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:53.364 05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:53.364 05:05:06 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:53.364 05:05:06 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:53.364 
05:05:06 -- common/autotest_common.sh@10 -- # set +x 00:04:53.364 05:05:06 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:56.661 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:56.661 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:57.229 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:57.229 05:05:10 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:57.229 05:05:10 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:57.229 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:57.229 05:05:10 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:57.229 05:05:10 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:57.229 05:05:10 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:57.229 05:05:10 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:57.229 05:05:10 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:57.229 05:05:10 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:57.229 05:05:10 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:57.229 05:05:10 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 
00:04:57.229 05:05:10 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:57.229 05:05:10 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:57.229 05:05:10 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:57.229 05:05:10 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:57.230 05:05:10 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:57.230 05:05:10 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:57.230 05:05:10 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:57.230 05:05:10 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:57.230 05:05:10 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:57.230 05:05:10 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:57.230 05:05:10 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:57.230 05:05:10 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:57.230 05:05:10 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:57.230 05:05:10 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:57.230 05:05:10 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:57.230 05:05:10 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=101998 00:04:57.230 05:05:10 -- common/autotest_common.sh@1585 -- # waitforlisten 101998 00:04:57.230 05:05:10 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:57.230 05:05:10 -- common/autotest_common.sh@835 -- # '[' -z 101998 ']' 00:04:57.230 05:05:10 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.230 05:05:10 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.230 05:05:10 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.230 05:05:10 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.230 05:05:10 -- common/autotest_common.sh@10 -- # set +x 00:04:57.489 [2024-12-15 05:05:10.953505] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:04:57.489 [2024-12-15 05:05:10.953555] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid101998 ] 00:04:57.489 [2024-12-15 05:05:11.028739] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.489 [2024-12-15 05:05:11.050766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.749 05:05:11 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.749 05:05:11 -- common/autotest_common.sh@868 -- # return 0 00:04:57.749 05:05:11 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:57.749 05:05:11 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:57.749 05:05:11 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:05:01.044 nvme0n1 00:05:01.044 05:05:14 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:05:01.044 [2024-12-15 05:05:14.445084] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session with error 18 00:05:01.044 [2024-12-15 05:05:14.445112] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:05:01.044 request: 00:05:01.044 { 00:05:01.044 "nvme_ctrlr_name": "nvme0", 00:05:01.044 "password": "test", 00:05:01.044 "method": 
"bdev_nvme_opal_revert", 00:05:01.044 "req_id": 1 00:05:01.044 } 00:05:01.044 Got JSON-RPC error response 00:05:01.044 response: 00:05:01.044 { 00:05:01.044 "code": -32603, 00:05:01.044 "message": "Internal error" 00:05:01.044 } 00:05:01.044 05:05:14 -- common/autotest_common.sh@1591 -- # true 00:05:01.044 05:05:14 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:05:01.044 05:05:14 -- common/autotest_common.sh@1595 -- # killprocess 101998 00:05:01.044 05:05:14 -- common/autotest_common.sh@954 -- # '[' -z 101998 ']' 00:05:01.044 05:05:14 -- common/autotest_common.sh@958 -- # kill -0 101998 00:05:01.044 05:05:14 -- common/autotest_common.sh@959 -- # uname 00:05:01.044 05:05:14 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.044 05:05:14 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 101998 00:05:01.044 05:05:14 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:01.044 05:05:14 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:01.044 05:05:14 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 101998' 00:05:01.044 killing process with pid 101998 00:05:01.044 05:05:14 -- common/autotest_common.sh@973 -- # kill 101998 00:05:01.044 05:05:14 -- common/autotest_common.sh@978 -- # wait 101998 00:05:01.044 EAL: Unexpected size 0 of DMA remapping cleared instead of 2097152 00:05:02.428 05:05:16 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:05:02.428 05:05:16 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:05:02.428 05:05:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:02.428 05:05:16 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:05:02.428 05:05:16 -- spdk/autotest.sh@149 -- # timing_enter lib 00:05:02.428 05:05:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:02.428 05:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:02.429 05:05:16 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:05:02.429 05:05:16 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:02.429 05:05:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.429 05:05:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.429 05:05:16 -- common/autotest_common.sh@10 -- # set +x 00:05:02.688 ************************************ 00:05:02.688 START TEST env 00:05:02.689
************************************ 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:05:02.689 * Looking for test storage... 00:05:02.689 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1711 -- # lcov --version 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:02.689 05:05:16 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:02.689 05:05:16 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:02.689 05:05:16 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:02.689 05:05:16 env -- scripts/common.sh@336 -- # IFS=.-: 00:05:02.689 05:05:16 env -- scripts/common.sh@336 -- # read -ra ver1 00:05:02.689 05:05:16 env -- scripts/common.sh@337 -- # IFS=.-: 00:05:02.689 05:05:16 env -- scripts/common.sh@337 -- # read -ra ver2 00:05:02.689 05:05:16 env -- scripts/common.sh@338 -- # local 'op=<' 00:05:02.689 05:05:16 env -- scripts/common.sh@340 -- # ver1_l=2 00:05:02.689 05:05:16 env -- scripts/common.sh@341 -- # ver2_l=1 00:05:02.689 05:05:16 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:02.689 05:05:16 env -- scripts/common.sh@344 -- # case "$op" in 00:05:02.689 05:05:16 env -- scripts/common.sh@345 -- # : 1 00:05:02.689 05:05:16 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:02.689 05:05:16 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:02.689 05:05:16 env -- scripts/common.sh@365 -- # decimal 1 00:05:02.689 05:05:16 env -- scripts/common.sh@353 -- # local d=1 00:05:02.689 05:05:16 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:02.689 05:05:16 env -- scripts/common.sh@355 -- # echo 1 00:05:02.689 05:05:16 env -- scripts/common.sh@365 -- # ver1[v]=1 00:05:02.689 05:05:16 env -- scripts/common.sh@366 -- # decimal 2 00:05:02.689 05:05:16 env -- scripts/common.sh@353 -- # local d=2 00:05:02.689 05:05:16 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:02.689 05:05:16 env -- scripts/common.sh@355 -- # echo 2 00:05:02.689 05:05:16 env -- scripts/common.sh@366 -- # ver2[v]=2 00:05:02.689 05:05:16 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:02.689 05:05:16 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:02.689 05:05:16 env -- scripts/common.sh@368 -- # return 0 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:02.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.689 --rc genhtml_branch_coverage=1 00:05:02.689 --rc genhtml_function_coverage=1 00:05:02.689 --rc genhtml_legend=1 00:05:02.689 --rc geninfo_all_blocks=1 00:05:02.689 --rc geninfo_unexecuted_blocks=1 00:05:02.689 00:05:02.689 ' 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:02.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.689 --rc genhtml_branch_coverage=1 00:05:02.689 --rc genhtml_function_coverage=1 00:05:02.689 --rc genhtml_legend=1 00:05:02.689 --rc geninfo_all_blocks=1 00:05:02.689 --rc geninfo_unexecuted_blocks=1 00:05:02.689 00:05:02.689 ' 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:02.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:02.689 --rc genhtml_branch_coverage=1 00:05:02.689 --rc genhtml_function_coverage=1 00:05:02.689 --rc genhtml_legend=1 00:05:02.689 --rc geninfo_all_blocks=1 00:05:02.689 --rc geninfo_unexecuted_blocks=1 00:05:02.689 00:05:02.689 ' 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:02.689 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:02.689 --rc genhtml_branch_coverage=1 00:05:02.689 --rc genhtml_function_coverage=1 00:05:02.689 --rc genhtml_legend=1 00:05:02.689 --rc geninfo_all_blocks=1 00:05:02.689 --rc geninfo_unexecuted_blocks=1 00:05:02.689 00:05:02.689 ' 00:05:02.689 05:05:16 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:02.689 05:05:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.689 05:05:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.689 ************************************ 00:05:02.689 START TEST env_memory 00:05:02.689 ************************************ 00:05:02.689 05:05:16 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:05:02.689 00:05:02.689 00:05:02.689 CUnit - A unit testing framework for C - Version 2.1-3 00:05:02.689 http://cunit.sourceforge.net/ 00:05:02.689 00:05:02.689 00:05:02.689 Suite: memory 00:05:02.950 Test: alloc and free memory map ...[2024-12-15 05:05:16.382721] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:05:02.950 passed 00:05:02.950 Test: mem map translation ...[2024-12-15 05:05:16.401685] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:05:02.950 [2024-12-15 
05:05:16.401705] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:05:02.950 [2024-12-15 05:05:16.401740] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:05:02.950 [2024-12-15 05:05:16.401746] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:05:02.950 passed 00:05:02.950 Test: mem map registration ...[2024-12-15 05:05:16.438144] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:05:02.950 [2024-12-15 05:05:16.438168] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:05:02.950 passed 00:05:02.950 Test: mem map adjacent registrations ...passed 00:05:02.950 00:05:02.950 Run Summary: Type Total Ran Passed Failed Inactive 00:05:02.950 suites 1 1 n/a 0 0 00:05:02.950 tests 4 4 4 0 0 00:05:02.950 asserts 152 152 152 0 n/a 00:05:02.950 00:05:02.950 Elapsed time = 0.123 seconds 00:05:02.950 00:05:02.950 real 0m0.132s 00:05:02.950 user 0m0.124s 00:05:02.950 sys 0m0.007s 00:05:02.950 05:05:16 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:02.950 05:05:16 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:05:02.950 ************************************ 00:05:02.950 END TEST env_memory 00:05:02.950 ************************************ 00:05:02.950 05:05:16 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:02.950 05:05:16 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:05:02.950 05:05:16 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:02.950 05:05:16 env -- common/autotest_common.sh@10 -- # set +x 00:05:02.950 ************************************ 00:05:02.950 START TEST env_vtophys 00:05:02.950 ************************************ 00:05:02.950 05:05:16 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:05:02.950 EAL: lib.eal log level changed from notice to debug 00:05:02.950 EAL: Detected lcore 0 as core 0 on socket 0 00:05:02.950 EAL: Detected lcore 1 as core 1 on socket 0 00:05:02.950 EAL: Detected lcore 2 as core 2 on socket 0 00:05:02.950 EAL: Detected lcore 3 as core 3 on socket 0 00:05:02.950 EAL: Detected lcore 4 as core 4 on socket 0 00:05:02.950 EAL: Detected lcore 5 as core 5 on socket 0 00:05:02.950 EAL: Detected lcore 6 as core 6 on socket 0 00:05:02.950 EAL: Detected lcore 7 as core 8 on socket 0 00:05:02.950 EAL: Detected lcore 8 as core 9 on socket 0 00:05:02.950 EAL: Detected lcore 9 as core 10 on socket 0 00:05:02.950 EAL: Detected lcore 10 as core 11 on socket 0 00:05:02.950 EAL: Detected lcore 11 as core 12 on socket 0 00:05:02.950 EAL: Detected lcore 12 as core 13 on socket 0 00:05:02.950 EAL: Detected lcore 13 as core 16 on socket 0 00:05:02.950 EAL: Detected lcore 14 as core 17 on socket 0 00:05:02.950 EAL: Detected lcore 15 as core 18 on socket 0 00:05:02.950 EAL: Detected lcore 16 as core 19 on socket 0 00:05:02.950 EAL: Detected lcore 17 as core 20 on socket 0 00:05:02.950 EAL: Detected lcore 18 as core 21 on socket 0 00:05:02.950 EAL: Detected lcore 19 as core 25 on socket 0 00:05:02.950 EAL: Detected lcore 20 as core 26 on socket 0 00:05:02.950 EAL: Detected lcore 21 as core 27 on socket 0 00:05:02.950 EAL: Detected lcore 22 as core 28 on socket 0 00:05:02.950 EAL: Detected lcore 23 as core 29 on socket 0 00:05:02.950 EAL: Detected lcore 24 as core 0 on socket 1 00:05:02.950 EAL: Detected lcore 25 
as core 1 on socket 1 00:05:02.950 EAL: Detected lcore 26 as core 2 on socket 1 00:05:02.950 EAL: Detected lcore 27 as core 3 on socket 1 00:05:02.950 EAL: Detected lcore 28 as core 4 on socket 1 00:05:02.950 EAL: Detected lcore 29 as core 5 on socket 1 00:05:02.950 EAL: Detected lcore 30 as core 6 on socket 1 00:05:02.950 EAL: Detected lcore 31 as core 8 on socket 1 00:05:02.950 EAL: Detected lcore 32 as core 9 on socket 1 00:05:02.950 EAL: Detected lcore 33 as core 10 on socket 1 00:05:02.950 EAL: Detected lcore 34 as core 11 on socket 1 00:05:02.950 EAL: Detected lcore 35 as core 12 on socket 1 00:05:02.950 EAL: Detected lcore 36 as core 13 on socket 1 00:05:02.950 EAL: Detected lcore 37 as core 16 on socket 1 00:05:02.950 EAL: Detected lcore 38 as core 17 on socket 1 00:05:02.950 EAL: Detected lcore 39 as core 18 on socket 1 00:05:02.950 EAL: Detected lcore 40 as core 19 on socket 1 00:05:02.950 EAL: Detected lcore 41 as core 20 on socket 1 00:05:02.950 EAL: Detected lcore 42 as core 21 on socket 1 00:05:02.950 EAL: Detected lcore 43 as core 25 on socket 1 00:05:02.950 EAL: Detected lcore 44 as core 26 on socket 1 00:05:02.950 EAL: Detected lcore 45 as core 27 on socket 1 00:05:02.950 EAL: Detected lcore 46 as core 28 on socket 1 00:05:02.950 EAL: Detected lcore 47 as core 29 on socket 1 00:05:02.950 EAL: Detected lcore 48 as core 0 on socket 0 00:05:02.950 EAL: Detected lcore 49 as core 1 on socket 0 00:05:02.950 EAL: Detected lcore 50 as core 2 on socket 0 00:05:02.950 EAL: Detected lcore 51 as core 3 on socket 0 00:05:02.950 EAL: Detected lcore 52 as core 4 on socket 0 00:05:02.950 EAL: Detected lcore 53 as core 5 on socket 0 00:05:02.950 EAL: Detected lcore 54 as core 6 on socket 0 00:05:02.950 EAL: Detected lcore 55 as core 8 on socket 0 00:05:02.950 EAL: Detected lcore 56 as core 9 on socket 0 00:05:02.950 EAL: Detected lcore 57 as core 10 on socket 0 00:05:02.950 EAL: Detected lcore 58 as core 11 on socket 0 00:05:02.950 EAL: Detected lcore 59 as core 12 
on socket 0 00:05:02.950 EAL: Detected lcore 60 as core 13 on socket 0 00:05:02.950 EAL: Detected lcore 61 as core 16 on socket 0 00:05:02.950 EAL: Detected lcore 62 as core 17 on socket 0 00:05:02.950 EAL: Detected lcore 63 as core 18 on socket 0 00:05:02.950 EAL: Detected lcore 64 as core 19 on socket 0 00:05:02.950 EAL: Detected lcore 65 as core 20 on socket 0 00:05:02.950 EAL: Detected lcore 66 as core 21 on socket 0 00:05:02.950 EAL: Detected lcore 67 as core 25 on socket 0 00:05:02.950 EAL: Detected lcore 68 as core 26 on socket 0 00:05:02.950 EAL: Detected lcore 69 as core 27 on socket 0 00:05:02.950 EAL: Detected lcore 70 as core 28 on socket 0 00:05:02.951 EAL: Detected lcore 71 as core 29 on socket 0 00:05:02.951 EAL: Detected lcore 72 as core 0 on socket 1 00:05:02.951 EAL: Detected lcore 73 as core 1 on socket 1 00:05:02.951 EAL: Detected lcore 74 as core 2 on socket 1 00:05:02.951 EAL: Detected lcore 75 as core 3 on socket 1 00:05:02.951 EAL: Detected lcore 76 as core 4 on socket 1 00:05:02.951 EAL: Detected lcore 77 as core 5 on socket 1 00:05:02.951 EAL: Detected lcore 78 as core 6 on socket 1 00:05:02.951 EAL: Detected lcore 79 as core 8 on socket 1 00:05:02.951 EAL: Detected lcore 80 as core 9 on socket 1 00:05:02.951 EAL: Detected lcore 81 as core 10 on socket 1 00:05:02.951 EAL: Detected lcore 82 as core 11 on socket 1 00:05:02.951 EAL: Detected lcore 83 as core 12 on socket 1 00:05:02.951 EAL: Detected lcore 84 as core 13 on socket 1 00:05:02.951 EAL: Detected lcore 85 as core 16 on socket 1 00:05:02.951 EAL: Detected lcore 86 as core 17 on socket 1 00:05:02.951 EAL: Detected lcore 87 as core 18 on socket 1 00:05:02.951 EAL: Detected lcore 88 as core 19 on socket 1 00:05:02.951 EAL: Detected lcore 89 as core 20 on socket 1 00:05:02.951 EAL: Detected lcore 90 as core 21 on socket 1 00:05:02.951 EAL: Detected lcore 91 as core 25 on socket 1 00:05:02.951 EAL: Detected lcore 92 as core 26 on socket 1 00:05:02.951 EAL: Detected lcore 93 as core 27 on 
socket 1 00:05:02.951 EAL: Detected lcore 94 as core 28 on socket 1 00:05:02.951 EAL: Detected lcore 95 as core 29 on socket 1 00:05:02.951 EAL: Maximum logical cores by configuration: 128 00:05:02.951 EAL: Detected CPU lcores: 96 00:05:02.951 EAL: Detected NUMA nodes: 2 00:05:02.951 EAL: Checking presence of .so 'librte_eal.so.23.0' 00:05:02.951 EAL: Detected shared linkage of DPDK 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so.23.0 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so.23.0 00:05:02.951 EAL: Registered [vdev] bus. 00:05:02.951 EAL: bus.vdev log level changed from disabled to notice 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so.23.0 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so.23.0 00:05:02.951 EAL: pmd.net.i40e.init log level changed from disabled to notice 00:05:02.951 EAL: pmd.net.i40e.driver log level changed from disabled to notice 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_pci.so 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_bus_vdev.so 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_mempool_ring.so 00:05:02.951 EAL: open shared lib /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib/dpdk/pmds-23.0/librte_net_i40e.so 00:05:02.951 EAL: No shared files mode enabled, IPC will be disabled 00:05:02.951 EAL: No shared files mode enabled, IPC is disabled 00:05:02.951 EAL: Bus pci wants IOVA as 'DC' 00:05:02.951 EAL: Bus vdev wants IOVA as 'DC' 00:05:02.951 EAL: Buses did not request a specific IOVA mode. 
00:05:02.951 EAL: IOMMU is available, selecting IOVA as VA mode. 00:05:02.951 EAL: Selected IOVA mode 'VA' 00:05:02.951 EAL: Probing VFIO support... 00:05:02.951 EAL: IOMMU type 1 (Type 1) is supported 00:05:02.951 EAL: IOMMU type 7 (sPAPR) is not supported 00:05:02.951 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:05:02.951 EAL: VFIO support initialized 00:05:02.951 EAL: Ask a virtual area of 0x2e000 bytes 00:05:02.951 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:05:02.951 EAL: Setting up physically contiguous memory... 00:05:02.951 EAL: Setting maximum number of open files to 524288 00:05:02.951 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:05:02.951 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:05:02.951 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.951 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:05:02.951 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.951 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:05:02.951 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.951 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:05:02.951 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.951 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:05:02.951 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes 00:05:02.951 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:05:02.951 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes 00:05:02.951 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 
00:05:02.951 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.951 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:05:02.951 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.951 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:05:02.951 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:05:02.951 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.951 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:05:02.951 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.951 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:05:02.951 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.951 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:05:02.951 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.951 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:05:02.951 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.951 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:05:02.951 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.951 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:05:02.951 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:05:02.951 EAL: Ask a virtual area of 0x61000 bytes
00:05:02.951 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:05:02.951 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:05:02.951 EAL: Ask a virtual area of 0x400000000 bytes
00:05:02.951 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:05:02.951 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:05:02.951 EAL: Hugepages will be freed exactly as allocated.
00:05:02.951 EAL: No shared files mode enabled, IPC is disabled
00:05:02.951 EAL: No shared files mode enabled, IPC is disabled
00:05:02.951 EAL: TSC frequency is ~2100000 KHz
00:05:02.951 EAL: Main lcore 0 is ready (tid=7f568781fa00;cpuset=[0])
00:05:02.951 EAL: Trying to obtain current memory policy.
00:05:02.951 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:02.951 EAL: Restoring previous memory policy: 0
00:05:02.951 EAL: request: mp_malloc_sync
00:05:02.951 EAL: No shared files mode enabled, IPC is disabled
00:05:02.951 EAL: Heap on socket 0 was expanded by 2MB
00:05:02.951 EAL: PCI device 0000:3d:00.0 on NUMA socket 0
00:05:02.951 EAL: probe driver: 8086:37d2 net_i40e
00:05:02.951 EAL: Not managed by a supported kernel driver, skipped
00:05:02.951 EAL: PCI device 0000:3d:00.1 on NUMA socket 0
00:05:02.951 EAL: probe driver: 8086:37d2 net_i40e
00:05:02.951 EAL: Not managed by a supported kernel driver, skipped
00:05:02.951 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: No PCI address specified using 'addr=' in: bus=pci
00:05:03.212 EAL: Mem event callback 'spdk:(nil)' registered
00:05:03.212 
00:05:03.212 
00:05:03.212 CUnit - A unit testing framework for C - Version 2.1-3
00:05:03.212 http://cunit.sourceforge.net/
00:05:03.212 
00:05:03.212 
00:05:03.212 Suite: components_suite
00:05:03.212 Test: vtophys_malloc_test ...passed
00:05:03.212 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 4MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 4MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 6MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 6MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 10MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 10MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 18MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 18MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 34MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 34MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 66MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 66MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 130MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 130MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.212 EAL: Restoring previous memory policy: 4
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was expanded by 258MB
00:05:03.212 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.212 EAL: request: mp_malloc_sync
00:05:03.212 EAL: No shared files mode enabled, IPC is disabled
00:05:03.212 EAL: Heap on socket 0 was shrunk by 258MB
00:05:03.212 EAL: Trying to obtain current memory policy.
00:05:03.212 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.473 EAL: Restoring previous memory policy: 4
00:05:03.473 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.473 EAL: request: mp_malloc_sync
00:05:03.473 EAL: No shared files mode enabled, IPC is disabled
00:05:03.473 EAL: Heap on socket 0 was expanded by 514MB
00:05:03.473 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.473 EAL: request: mp_malloc_sync
00:05:03.473 EAL: No shared files mode enabled, IPC is disabled
00:05:03.473 EAL: Heap on socket 0 was shrunk by 514MB
00:05:03.473 EAL: Trying to obtain current memory policy.
00:05:03.473 EAL: Setting policy MPOL_PREFERRED for socket 0
00:05:03.733 EAL: Restoring previous memory policy: 4
00:05:03.733 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.733 EAL: request: mp_malloc_sync
00:05:03.733 EAL: No shared files mode enabled, IPC is disabled
00:05:03.733 EAL: Heap on socket 0 was expanded by 1026MB
00:05:03.993 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.994 EAL: request: mp_malloc_sync
00:05:03.994 EAL: No shared files mode enabled, IPC is disabled
00:05:03.994 EAL: Heap on socket 0 was shrunk by 1026MB
00:05:03.994 passed
00:05:03.994 
00:05:03.994 Run Summary: Type Total Ran Passed Failed Inactive
00:05:03.994 suites 1 1 n/a 0 0
00:05:03.994 tests 2 2 2 0 0
00:05:03.994 asserts 497 497 497 0 n/a
00:05:03.994 
00:05:03.994 Elapsed time = 0.969 seconds
00:05:03.994 EAL: Calling mem event callback 'spdk:(nil)'
00:05:03.994 EAL: request: mp_malloc_sync
00:05:03.994 EAL: No shared files mode enabled, IPC is disabled
00:05:03.994 EAL: Heap on socket 0 was shrunk by 2MB
00:05:03.994 EAL: No shared files mode enabled, IPC is disabled
00:05:03.994 EAL: No shared files mode enabled, IPC is disabled
00:05:03.994 EAL: No shared files mode enabled, IPC is disabled
00:05:03.994 
00:05:03.994 real 0m1.095s
00:05:03.994 user 0m0.641s
00:05:03.994 sys 0m0.428s
00:05:03.994 05:05:17 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:03.994 05:05:17 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:05:03.994 ************************************
00:05:03.994 END TEST env_vtophys
00:05:03.994 ************************************
00:05:04.253 05:05:17 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:04.253 05:05:17 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:04.253 05:05:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.253 05:05:17 env -- common/autotest_common.sh@10 -- # set +x
00:05:04.253 ************************************
00:05:04.253 START TEST env_pci
00:05:04.253 ************************************
00:05:04.253 05:05:17 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:05:04.253 
00:05:04.253 
00:05:04.253 CUnit - A unit testing framework for C - Version 2.1-3
00:05:04.253 http://cunit.sourceforge.net/
00:05:04.253 
00:05:04.253 
00:05:04.253 Suite: pci
00:05:04.253 Test: pci_hook ...[2024-12-15 05:05:17.732352] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 103250 has claimed it
00:05:04.253 EAL: Cannot find device (10000:00:01.0)
00:05:04.253 EAL: Failed to attach device on primary process
00:05:04.253 passed
00:05:04.253 
00:05:04.253 Run Summary: Type Total Ran Passed Failed Inactive
00:05:04.253 suites 1 1 n/a 0 0
00:05:04.253 tests 1 1 1 0 0
00:05:04.253 asserts 25 25 25 0 n/a
00:05:04.253 
00:05:04.253 Elapsed time = 0.027 seconds
00:05:04.253 
00:05:04.253 real 0m0.045s
00:05:04.253 user 0m0.010s
00:05:04.253 sys 0m0.034s
00:05:04.254 05:05:17 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:04.254 05:05:17 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:05:04.254 ************************************
00:05:04.254 END TEST env_pci
00:05:04.254 ************************************
00:05:04.254 05:05:17 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:05:04.254 05:05:17 env -- env/env.sh@15 -- # uname
00:05:04.254 05:05:17 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:05:04.254 05:05:17 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:05:04.254 05:05:17 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:04.254 05:05:17 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:05:04.254 05:05:17 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:04.254 05:05:17 env -- common/autotest_common.sh@10 -- # set +x
00:05:04.254 ************************************
00:05:04.254 START TEST env_dpdk_post_init
00:05:04.254 ************************************
00:05:04.254 05:05:17 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:05:04.254 EAL: Detected CPU lcores: 96
00:05:04.254 EAL: Detected NUMA nodes: 2
00:05:04.254 EAL: Detected shared linkage of DPDK
00:05:04.254 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:04.254 EAL: Selected IOVA mode 'VA'
00:05:04.254 EAL: VFIO support initialized
00:05:04.254 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:04.513 EAL: Using IOMMU type 1 (Type 1)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:05:04.513 EAL: Ignore mapping IO port bar(1)
00:05:04.513 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:05:05.454 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:05:05.454 EAL: Ignore mapping IO port bar(1)
00:05:05.454 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:05:08.749 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:05:08.749 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:05:08.749 Starting DPDK initialization...
00:05:08.749 Starting SPDK post initialization...
00:05:08.749 SPDK NVMe probe
00:05:08.749 Attaching to 0000:5e:00.0
00:05:08.749 Attached to 0000:5e:00.0
00:05:08.749 Cleaning up...
00:05:08.749 
00:05:08.749 real 0m4.376s
00:05:08.749 user 0m3.272s
00:05:08.749 sys 0m0.175s
00:05:08.749 05:05:22 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.749 05:05:22 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:05:08.749 ************************************
00:05:08.749 END TEST env_dpdk_post_init
00:05:08.749 ************************************
00:05:08.749 05:05:22 env -- env/env.sh@26 -- # uname
00:05:08.749 05:05:22 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:05:08.749 05:05:22 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:08.749 05:05:22 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:08.749 05:05:22 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.749 05:05:22 env -- common/autotest_common.sh@10 -- # set +x
00:05:08.749 ************************************
00:05:08.749 START TEST env_mem_callbacks
00:05:08.749 ************************************
00:05:08.749 05:05:22 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:05:08.749 EAL: Detected CPU lcores: 96
00:05:08.749 EAL: Detected NUMA nodes: 2
00:05:08.749 EAL: Detected shared linkage of DPDK
00:05:08.749 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:05:08.749 EAL: Selected IOVA mode 'VA'
00:05:08.749 EAL: VFIO support initialized
00:05:08.749 TELEMETRY: No legacy callbacks, legacy socket not created
00:05:08.749 
00:05:08.749 
00:05:08.749 CUnit - A unit testing framework for C - Version 2.1-3
00:05:08.749 http://cunit.sourceforge.net/
00:05:08.749 
00:05:08.749 
00:05:08.749 Suite: memory
00:05:08.749 Test: test ...
00:05:08.749 register 0x200000200000 2097152
00:05:08.749 malloc 3145728
00:05:08.749 register 0x200000400000 4194304
00:05:08.749 buf 0x200000500000 len 3145728 PASSED
00:05:08.749 malloc 64
00:05:08.749 buf 0x2000004fff40 len 64 PASSED
00:05:08.749 malloc 4194304
00:05:08.749 register 0x200000800000 6291456
00:05:08.749 buf 0x200000a00000 len 4194304 PASSED
00:05:08.749 free 0x200000500000 3145728
00:05:08.749 free 0x2000004fff40 64
00:05:08.749 unregister 0x200000400000 4194304 PASSED
00:05:08.749 free 0x200000a00000 4194304
00:05:08.749 unregister 0x200000800000 6291456 PASSED
00:05:08.749 malloc 8388608
00:05:08.749 register 0x200000400000 10485760
00:05:08.749 buf 0x200000600000 len 8388608 PASSED
00:05:08.749 free 0x200000600000 8388608
00:05:08.749 unregister 0x200000400000 10485760 PASSED
00:05:08.749 passed
00:05:08.749 
00:05:08.749 Run Summary: Type Total Ran Passed Failed Inactive
00:05:08.749 suites 1 1 n/a 0 0
00:05:08.749 tests 1 1 1 0 0
00:05:08.749 asserts 15 15 15 0 n/a
00:05:08.749 
00:05:08.749 Elapsed time = 0.004 seconds
00:05:08.749 
00:05:08.749 real 0m0.049s
00:05:08.749 user 0m0.013s
00:05:08.749 sys 0m0.036s
00:05:08.749 05:05:22 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.749 05:05:22 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:05:08.749 ************************************
00:05:08.749 END TEST env_mem_callbacks
00:05:08.749 ************************************
00:05:08.749 
00:05:08.749 real 0m6.242s
00:05:08.749 user 0m4.304s
00:05:08.749 sys 0m1.019s
00:05:08.749 05:05:22 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:08.749 05:05:22 env -- common/autotest_common.sh@10 -- # set +x
00:05:08.749 ************************************
00:05:08.749 END TEST env
00:05:08.749 ************************************
00:05:08.749 05:05:22 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:08.749 05:05:22 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:08.749 05:05:22 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:08.749 05:05:22 -- common/autotest_common.sh@10 -- # set +x
00:05:09.009 ************************************
00:05:09.009 START TEST rpc
00:05:09.009 ************************************
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:05:09.010 * Looking for test storage...
00:05:09.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1711 -- # lcov --version
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:09.010 05:05:22 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:09.010 05:05:22 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:05:09.010 05:05:22 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:05:09.010 05:05:22 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:05:09.010 05:05:22 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:05:09.010 05:05:22 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:05:09.010 05:05:22 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:09.010 05:05:22 rpc -- scripts/common.sh@344 -- # case "$op" in
00:05:09.010 05:05:22 rpc -- scripts/common.sh@345 -- # : 1
00:05:09.010 05:05:22 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:09.010 05:05:22 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:09.010 05:05:22 rpc -- scripts/common.sh@365 -- # decimal 1
00:05:09.010 05:05:22 rpc -- scripts/common.sh@353 -- # local d=1
00:05:09.010 05:05:22 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:09.010 05:05:22 rpc -- scripts/common.sh@355 -- # echo 1
00:05:09.010 05:05:22 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:05:09.010 05:05:22 rpc -- scripts/common.sh@366 -- # decimal 2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@353 -- # local d=2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:09.010 05:05:22 rpc -- scripts/common.sh@355 -- # echo 2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:05:09.010 05:05:22 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:09.010 05:05:22 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:09.010 05:05:22 rpc -- scripts/common.sh@368 -- # return 0
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:05:09.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.010 --rc genhtml_branch_coverage=1
00:05:09.010 --rc genhtml_function_coverage=1
00:05:09.010 --rc genhtml_legend=1
00:05:09.010 --rc geninfo_all_blocks=1
00:05:09.010 --rc geninfo_unexecuted_blocks=1
00:05:09.010 
00:05:09.010 '
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:05:09.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.010 --rc genhtml_branch_coverage=1
00:05:09.010 --rc genhtml_function_coverage=1
00:05:09.010 --rc genhtml_legend=1
00:05:09.010 --rc geninfo_all_blocks=1
00:05:09.010 --rc geninfo_unexecuted_blocks=1
00:05:09.010 
00:05:09.010 '
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:05:09.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.010 --rc genhtml_branch_coverage=1
00:05:09.010 --rc genhtml_function_coverage=1
00:05:09.010 --rc genhtml_legend=1
00:05:09.010 --rc geninfo_all_blocks=1
00:05:09.010 --rc geninfo_unexecuted_blocks=1
00:05:09.010 
00:05:09.010 '
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:05:09.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:09.010 --rc genhtml_branch_coverage=1
00:05:09.010 --rc genhtml_function_coverage=1
00:05:09.010 --rc genhtml_legend=1
00:05:09.010 --rc geninfo_all_blocks=1
00:05:09.010 --rc geninfo_unexecuted_blocks=1
00:05:09.010 
00:05:09.010 '
00:05:09.010 05:05:22 rpc -- rpc/rpc.sh@65 -- # spdk_pid=104088
00:05:09.010 05:05:22 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:05:09.010 05:05:22 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:05:09.010 05:05:22 rpc -- rpc/rpc.sh@67 -- # waitforlisten 104088
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@835 -- # '[' -z 104088 ']'
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:09.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:09.010 05:05:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:09.010 [2024-12-15 05:05:22.671745] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:05:09.010 [2024-12-15 05:05:22.671791] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104088 ]
00:05:09.271 [2024-12-15 05:05:22.744714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:09.271 [2024-12-15 05:05:22.767174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:05:09.271 [2024-12-15 05:05:22.767210] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 104088' to capture a snapshot of events at runtime.
00:05:09.271 [2024-12-15 05:05:22.767217] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:05:09.271 [2024-12-15 05:05:22.767224] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:05:09.271 [2024-12-15 05:05:22.767229] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid104088 for offline analysis/debug.
00:05:09.271 [2024-12-15 05:05:22.767695] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.531 05:05:22 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:09.531 05:05:22 rpc -- common/autotest_common.sh@868 -- # return 0
00:05:09.531 05:05:22 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:09.531 05:05:22 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:05:09.531 05:05:22 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:05:09.531 05:05:22 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:05:09.531 05:05:22 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:09.531 05:05:22 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:09.531 05:05:22 rpc -- common/autotest_common.sh@10 -- # set +x
00:05:09.531 ************************************
00:05:09.531 START TEST rpc_integrity
00:05:09.531 ************************************
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.531 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.531 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:05:09.531 {
00:05:09.531 "name": "Malloc0",
00:05:09.531 "aliases": [
00:05:09.531 "07392544-8dec-4ae6-9cae-614eae35db32"
00:05:09.531 ],
00:05:09.531 "product_name": "Malloc disk",
00:05:09.531 "block_size": 512,
00:05:09.531 "num_blocks": 16384,
00:05:09.531 "uuid": "07392544-8dec-4ae6-9cae-614eae35db32",
00:05:09.531 "assigned_rate_limits": {
00:05:09.531 "rw_ios_per_sec": 0,
00:05:09.531 "rw_mbytes_per_sec": 0,
00:05:09.531 "r_mbytes_per_sec": 0,
00:05:09.531 "w_mbytes_per_sec": 0
00:05:09.531 },
00:05:09.531 "claimed": false,
00:05:09.531 "zoned": false,
00:05:09.531 "supported_io_types": {
00:05:09.531 "read": true,
00:05:09.531 "write": true,
00:05:09.531 "unmap": true,
00:05:09.531 "flush": true,
00:05:09.531 "reset": true,
00:05:09.531 "nvme_admin": false,
00:05:09.531 "nvme_io": false,
00:05:09.531 "nvme_io_md": false,
00:05:09.531 "write_zeroes": true,
00:05:09.531 "zcopy": true,
00:05:09.531 "get_zone_info": false,
00:05:09.531 "zone_management": false,
00:05:09.531 "zone_append": false,
00:05:09.531 "compare": false,
00:05:09.531 "compare_and_write": false,
00:05:09.531 "abort": true,
00:05:09.531 "seek_hole": false,
00:05:09.531 "seek_data": false,
00:05:09.531 "copy": true,
00:05:09.531 "nvme_iov_md": false
00:05:09.531 },
00:05:09.531 "memory_domains": [
00:05:09.531 {
00:05:09.531 "dma_device_id": "system",
00:05:09.531 "dma_device_type": 1
00:05:09.531 },
00:05:09.531 {
00:05:09.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:09.531 "dma_device_type": 2
00:05:09.531 }
00:05:09.531 ],
00:05:09.531 "driver_specific": {}
00:05:09.532 }
00:05:09.532 ]'
00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:05:09.532 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.532 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.532 [2024-12-15 05:05:23.135628] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:05:09.532 [2024-12-15 05:05:23.135655] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened
00:05:09.532 [2024-12-15 05:05:23.135669] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x239ca00
00:05:09.532 [2024-12-15 05:05:23.135675] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed
00:05:09.532 [2024-12-15 05:05:23.136724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:05:09.532 [2024-12-15 05:05:23.136744] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:05:09.532 Passthru0
00:05:09.532 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:05:09.532 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:09.532 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:05:09.532 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:05:09.532 {
00:05:09.532 "name": "Malloc0",
00:05:09.532 "aliases": [
00:05:09.532 "07392544-8dec-4ae6-9cae-614eae35db32"
00:05:09.532 ],
00:05:09.532 "product_name": "Malloc disk",
00:05:09.532 "block_size": 512,
00:05:09.532 "num_blocks": 16384,
00:05:09.532 "uuid": "07392544-8dec-4ae6-9cae-614eae35db32",
00:05:09.532 "assigned_rate_limits": {
00:05:09.532 "rw_ios_per_sec": 0,
00:05:09.532 "rw_mbytes_per_sec": 0,
00:05:09.532 "r_mbytes_per_sec": 0,
00:05:09.532 "w_mbytes_per_sec": 0
00:05:09.532 },
00:05:09.532 "claimed": true,
00:05:09.532 "claim_type": "exclusive_write",
00:05:09.532 "zoned": false,
00:05:09.532 "supported_io_types": {
00:05:09.532 "read": true,
00:05:09.532 "write": true,
00:05:09.532 "unmap": true,
00:05:09.532 "flush": true,
00:05:09.532 "reset": true,
00:05:09.532 "nvme_admin": false,
00:05:09.532 "nvme_io": false,
00:05:09.532 "nvme_io_md": false,
00:05:09.532 "write_zeroes": true,
00:05:09.532 "zcopy": true,
00:05:09.532 "get_zone_info": false,
00:05:09.532 "zone_management": false,
00:05:09.532 "zone_append": false,
00:05:09.532 "compare": false,
00:05:09.532 "compare_and_write": false,
00:05:09.532 "abort": true,
00:05:09.532 "seek_hole": false,
00:05:09.532 "seek_data": false,
00:05:09.532 "copy": true,
00:05:09.532 "nvme_iov_md": false
00:05:09.532 },
00:05:09.532 "memory_domains": [
00:05:09.532 {
00:05:09.532 "dma_device_id": "system",
00:05:09.532 "dma_device_type": 1
00:05:09.532 },
00:05:09.532 {
00:05:09.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:05:09.532 "dma_device_type": 2
00:05:09.532 }
00:05:09.532 ],
00:05:09.532 "driver_specific": {}
00:05:09.532 },
00:05:09.532 {
00:05:09.532 "name": "Passthru0", 00:05:09.532 "aliases": [ 00:05:09.532 "949eca6a-8538-5009-ae67-10ee0d1d8d24" 00:05:09.532 ], 00:05:09.532 "product_name": "passthru", 00:05:09.532 "block_size": 512, 00:05:09.532 "num_blocks": 16384, 00:05:09.532 "uuid": "949eca6a-8538-5009-ae67-10ee0d1d8d24", 00:05:09.532 "assigned_rate_limits": { 00:05:09.532 "rw_ios_per_sec": 0, 00:05:09.532 "rw_mbytes_per_sec": 0, 00:05:09.532 "r_mbytes_per_sec": 0, 00:05:09.532 "w_mbytes_per_sec": 0 00:05:09.532 }, 00:05:09.532 "claimed": false, 00:05:09.532 "zoned": false, 00:05:09.532 "supported_io_types": { 00:05:09.532 "read": true, 00:05:09.532 "write": true, 00:05:09.532 "unmap": true, 00:05:09.532 "flush": true, 00:05:09.532 "reset": true, 00:05:09.532 "nvme_admin": false, 00:05:09.532 "nvme_io": false, 00:05:09.532 "nvme_io_md": false, 00:05:09.532 "write_zeroes": true, 00:05:09.532 "zcopy": true, 00:05:09.532 "get_zone_info": false, 00:05:09.532 "zone_management": false, 00:05:09.532 "zone_append": false, 00:05:09.532 "compare": false, 00:05:09.532 "compare_and_write": false, 00:05:09.532 "abort": true, 00:05:09.532 "seek_hole": false, 00:05:09.532 "seek_data": false, 00:05:09.532 "copy": true, 00:05:09.532 "nvme_iov_md": false 00:05:09.532 }, 00:05:09.532 "memory_domains": [ 00:05:09.532 { 00:05:09.532 "dma_device_id": "system", 00:05:09.532 "dma_device_type": 1 00:05:09.532 }, 00:05:09.532 { 00:05:09.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.532 "dma_device_type": 2 00:05:09.532 } 00:05:09.532 ], 00:05:09.532 "driver_specific": { 00:05:09.532 "passthru": { 00:05:09.532 "name": "Passthru0", 00:05:09.532 "base_bdev_name": "Malloc0" 00:05:09.532 } 00:05:09.532 } 00:05:09.532 } 00:05:09.532 ]' 00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:09.532 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:09.532 05:05:23 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.532 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.793 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.793 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.793 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:09.793 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:09.793 05:05:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:09.793 00:05:09.793 real 0m0.265s 00:05:09.793 user 0m0.165s 00:05:09.793 sys 0m0.035s 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 ************************************ 00:05:09.793 END TEST rpc_integrity 00:05:09.793 ************************************ 00:05:09.793 05:05:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:05:09.793 05:05:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:09.793 05:05:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:09.793 05:05:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 ************************************ 00:05:09.793 START TEST rpc_plugins 
00:05:09.793 ************************************ 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:05:09.793 { 00:05:09.793 "name": "Malloc1", 00:05:09.793 "aliases": [ 00:05:09.793 "dac108da-1bb1-42f0-a306-f822d2dd3d73" 00:05:09.793 ], 00:05:09.793 "product_name": "Malloc disk", 00:05:09.793 "block_size": 4096, 00:05:09.793 "num_blocks": 256, 00:05:09.793 "uuid": "dac108da-1bb1-42f0-a306-f822d2dd3d73", 00:05:09.793 "assigned_rate_limits": { 00:05:09.793 "rw_ios_per_sec": 0, 00:05:09.793 "rw_mbytes_per_sec": 0, 00:05:09.793 "r_mbytes_per_sec": 0, 00:05:09.793 "w_mbytes_per_sec": 0 00:05:09.793 }, 00:05:09.793 "claimed": false, 00:05:09.793 "zoned": false, 00:05:09.793 "supported_io_types": { 00:05:09.793 "read": true, 00:05:09.793 "write": true, 00:05:09.793 "unmap": true, 00:05:09.793 "flush": true, 00:05:09.793 "reset": true, 00:05:09.793 "nvme_admin": false, 00:05:09.793 "nvme_io": false, 00:05:09.793 "nvme_io_md": false, 00:05:09.793 "write_zeroes": true, 00:05:09.793 "zcopy": true, 00:05:09.793 "get_zone_info": false, 00:05:09.793 "zone_management": false, 00:05:09.793 
"zone_append": false, 00:05:09.793 "compare": false, 00:05:09.793 "compare_and_write": false, 00:05:09.793 "abort": true, 00:05:09.793 "seek_hole": false, 00:05:09.793 "seek_data": false, 00:05:09.793 "copy": true, 00:05:09.793 "nvme_iov_md": false 00:05:09.793 }, 00:05:09.793 "memory_domains": [ 00:05:09.793 { 00:05:09.793 "dma_device_id": "system", 00:05:09.793 "dma_device_type": 1 00:05:09.793 }, 00:05:09.793 { 00:05:09.793 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:09.793 "dma_device_type": 2 00:05:09.793 } 00:05:09.793 ], 00:05:09.793 "driver_specific": {} 00:05:09.793 } 00:05:09.793 ]' 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:05:09.793 05:05:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:05:09.793 00:05:09.793 real 0m0.132s 00:05:09.793 user 0m0.078s 00:05:09.793 sys 0m0.018s 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:09.793 05:05:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:05:09.793 ************************************ 
00:05:09.793 END TEST rpc_plugins 00:05:09.793 ************************************ 00:05:10.054 05:05:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:05:10.054 05:05:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.054 05:05:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.054 05:05:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.054 ************************************ 00:05:10.054 START TEST rpc_trace_cmd_test 00:05:10.054 ************************************ 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:05:10.054 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid104088", 00:05:10.054 "tpoint_group_mask": "0x8", 00:05:10.054 "iscsi_conn": { 00:05:10.054 "mask": "0x2", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "scsi": { 00:05:10.054 "mask": "0x4", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "bdev": { 00:05:10.054 "mask": "0x8", 00:05:10.054 "tpoint_mask": "0xffffffffffffffff" 00:05:10.054 }, 00:05:10.054 "nvmf_rdma": { 00:05:10.054 "mask": "0x10", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "nvmf_tcp": { 00:05:10.054 "mask": "0x20", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "ftl": { 00:05:10.054 "mask": "0x40", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "blobfs": { 00:05:10.054 "mask": "0x80", 00:05:10.054 
"tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "dsa": { 00:05:10.054 "mask": "0x200", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "thread": { 00:05:10.054 "mask": "0x400", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "nvme_pcie": { 00:05:10.054 "mask": "0x800", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "iaa": { 00:05:10.054 "mask": "0x1000", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "nvme_tcp": { 00:05:10.054 "mask": "0x2000", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "bdev_nvme": { 00:05:10.054 "mask": "0x4000", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "sock": { 00:05:10.054 "mask": "0x8000", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "blob": { 00:05:10.054 "mask": "0x10000", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "bdev_raid": { 00:05:10.054 "mask": "0x20000", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 }, 00:05:10.054 "scheduler": { 00:05:10.054 "mask": "0x40000", 00:05:10.054 "tpoint_mask": "0x0" 00:05:10.054 } 00:05:10.054 }' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:05:10.054 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:05:10.314 05:05:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:05:10.314 00:05:10.314 real 0m0.230s 00:05:10.314 user 0m0.188s 00:05:10.314 sys 0m0.031s 00:05:10.314 05:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.314 05:05:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 ************************************ 00:05:10.314 END TEST rpc_trace_cmd_test 00:05:10.314 ************************************ 00:05:10.314 05:05:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:05:10.314 05:05:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:05:10.314 05:05:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:05:10.314 05:05:23 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:10.314 05:05:23 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:10.314 05:05:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 ************************************ 00:05:10.314 START TEST rpc_daemon_integrity 00:05:10.314 ************************************ 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:05:10.314 { 00:05:10.314 "name": "Malloc2", 00:05:10.314 "aliases": [ 00:05:10.314 "27d21e57-5098-4ce3-9476-de78b8e40de7" 00:05:10.314 ], 00:05:10.314 "product_name": "Malloc disk", 00:05:10.314 "block_size": 512, 00:05:10.314 "num_blocks": 16384, 00:05:10.314 "uuid": "27d21e57-5098-4ce3-9476-de78b8e40de7", 00:05:10.314 "assigned_rate_limits": { 00:05:10.314 "rw_ios_per_sec": 0, 00:05:10.314 "rw_mbytes_per_sec": 0, 00:05:10.314 "r_mbytes_per_sec": 0, 00:05:10.314 "w_mbytes_per_sec": 0 00:05:10.314 }, 00:05:10.314 "claimed": false, 00:05:10.314 "zoned": false, 00:05:10.314 "supported_io_types": { 00:05:10.314 "read": true, 00:05:10.314 "write": true, 00:05:10.314 "unmap": true, 00:05:10.314 "flush": true, 00:05:10.314 "reset": true, 00:05:10.314 "nvme_admin": false, 00:05:10.314 "nvme_io": false, 00:05:10.314 "nvme_io_md": false, 00:05:10.314 "write_zeroes": true, 00:05:10.314 "zcopy": true, 00:05:10.314 "get_zone_info": false, 00:05:10.314 "zone_management": false, 00:05:10.314 "zone_append": false, 00:05:10.314 "compare": false, 00:05:10.314 "compare_and_write": false, 00:05:10.314 "abort": true, 00:05:10.314 "seek_hole": false, 00:05:10.314 "seek_data": false, 00:05:10.314 "copy": true, 00:05:10.314 "nvme_iov_md": false 00:05:10.314 }, 00:05:10.314 "memory_domains": [ 00:05:10.314 { 
00:05:10.314 "dma_device_id": "system", 00:05:10.314 "dma_device_type": 1 00:05:10.314 }, 00:05:10.314 { 00:05:10.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.314 "dma_device_type": 2 00:05:10.314 } 00:05:10.314 ], 00:05:10.314 "driver_specific": {} 00:05:10.314 } 00:05:10.314 ]' 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 [2024-12-15 05:05:23.973877] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:05:10.314 [2024-12-15 05:05:23.973903] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:10.314 [2024-12-15 05:05:23.973915] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x225aac0 00:05:10.314 [2024-12-15 05:05:23.973921] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:10.314 [2024-12-15 05:05:23.974849] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:10.314 [2024-12-15 05:05:23.974870] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:05:10.314 Passthru0 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.314 05:05:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:05:10.574 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:05:10.575 { 00:05:10.575 "name": "Malloc2", 00:05:10.575 "aliases": [ 00:05:10.575 "27d21e57-5098-4ce3-9476-de78b8e40de7" 00:05:10.575 ], 00:05:10.575 "product_name": "Malloc disk", 00:05:10.575 "block_size": 512, 00:05:10.575 "num_blocks": 16384, 00:05:10.575 "uuid": "27d21e57-5098-4ce3-9476-de78b8e40de7", 00:05:10.575 "assigned_rate_limits": { 00:05:10.575 "rw_ios_per_sec": 0, 00:05:10.575 "rw_mbytes_per_sec": 0, 00:05:10.575 "r_mbytes_per_sec": 0, 00:05:10.575 "w_mbytes_per_sec": 0 00:05:10.575 }, 00:05:10.575 "claimed": true, 00:05:10.575 "claim_type": "exclusive_write", 00:05:10.575 "zoned": false, 00:05:10.575 "supported_io_types": { 00:05:10.575 "read": true, 00:05:10.575 "write": true, 00:05:10.575 "unmap": true, 00:05:10.575 "flush": true, 00:05:10.575 "reset": true, 00:05:10.575 "nvme_admin": false, 00:05:10.575 "nvme_io": false, 00:05:10.575 "nvme_io_md": false, 00:05:10.575 "write_zeroes": true, 00:05:10.575 "zcopy": true, 00:05:10.575 "get_zone_info": false, 00:05:10.575 "zone_management": false, 00:05:10.575 "zone_append": false, 00:05:10.575 "compare": false, 00:05:10.575 "compare_and_write": false, 00:05:10.575 "abort": true, 00:05:10.575 "seek_hole": false, 00:05:10.575 "seek_data": false, 00:05:10.575 "copy": true, 00:05:10.575 "nvme_iov_md": false 00:05:10.575 }, 00:05:10.575 "memory_domains": [ 00:05:10.575 { 00:05:10.575 "dma_device_id": "system", 00:05:10.575 "dma_device_type": 1 00:05:10.575 }, 00:05:10.575 { 00:05:10.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.575 "dma_device_type": 2 00:05:10.575 } 00:05:10.575 ], 00:05:10.575 "driver_specific": {} 00:05:10.575 }, 00:05:10.575 { 00:05:10.575 "name": "Passthru0", 00:05:10.575 "aliases": [ 00:05:10.575 "7f4a7986-7103-5aba-b7df-922ff6c9b34a" 00:05:10.575 ], 00:05:10.575 "product_name": "passthru", 00:05:10.575 "block_size": 512, 00:05:10.575 "num_blocks": 16384, 00:05:10.575 "uuid": 
"7f4a7986-7103-5aba-b7df-922ff6c9b34a", 00:05:10.575 "assigned_rate_limits": { 00:05:10.575 "rw_ios_per_sec": 0, 00:05:10.575 "rw_mbytes_per_sec": 0, 00:05:10.575 "r_mbytes_per_sec": 0, 00:05:10.575 "w_mbytes_per_sec": 0 00:05:10.575 }, 00:05:10.575 "claimed": false, 00:05:10.575 "zoned": false, 00:05:10.575 "supported_io_types": { 00:05:10.575 "read": true, 00:05:10.575 "write": true, 00:05:10.575 "unmap": true, 00:05:10.575 "flush": true, 00:05:10.575 "reset": true, 00:05:10.575 "nvme_admin": false, 00:05:10.575 "nvme_io": false, 00:05:10.575 "nvme_io_md": false, 00:05:10.575 "write_zeroes": true, 00:05:10.575 "zcopy": true, 00:05:10.575 "get_zone_info": false, 00:05:10.575 "zone_management": false, 00:05:10.575 "zone_append": false, 00:05:10.575 "compare": false, 00:05:10.575 "compare_and_write": false, 00:05:10.575 "abort": true, 00:05:10.575 "seek_hole": false, 00:05:10.575 "seek_data": false, 00:05:10.575 "copy": true, 00:05:10.575 "nvme_iov_md": false 00:05:10.575 }, 00:05:10.575 "memory_domains": [ 00:05:10.575 { 00:05:10.575 "dma_device_id": "system", 00:05:10.575 "dma_device_type": 1 00:05:10.575 }, 00:05:10.575 { 00:05:10.575 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:05:10.575 "dma_device_type": 2 00:05:10.575 } 00:05:10.575 ], 00:05:10.575 "driver_specific": { 00:05:10.575 "passthru": { 00:05:10.575 "name": "Passthru0", 00:05:10.575 "base_bdev_name": "Malloc2" 00:05:10.575 } 00:05:10.575 } 00:05:10.575 } 00:05:10.575 ]' 00:05:10.575 05:05:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:05:10.575 00:05:10.575 real 0m0.276s 00:05:10.575 user 0m0.170s 00:05:10.575 sys 0m0.044s 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.575 05:05:24 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:05:10.575 ************************************ 00:05:10.575 END TEST rpc_daemon_integrity 00:05:10.575 ************************************ 00:05:10.575 05:05:24 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:05:10.575 05:05:24 rpc -- rpc/rpc.sh@84 -- # killprocess 104088 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@954 -- # '[' -z 104088 ']' 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@958 -- # kill -0 104088 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@959 -- # uname 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:10.575 05:05:24 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104088 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104088' 00:05:10.575 killing process with pid 104088 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@973 -- # kill 104088 00:05:10.575 05:05:24 rpc -- common/autotest_common.sh@978 -- # wait 104088 00:05:10.836 00:05:10.836 real 0m2.052s 00:05:10.836 user 0m2.606s 00:05:10.836 sys 0m0.692s 00:05:10.836 05:05:24 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:10.836 05:05:24 rpc -- common/autotest_common.sh@10 -- # set +x 00:05:10.836 ************************************ 00:05:10.836 END TEST rpc 00:05:10.836 ************************************ 00:05:11.097 05:05:24 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.097 05:05:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.097 05:05:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.097 05:05:24 -- common/autotest_common.sh@10 -- # set +x 00:05:11.097 ************************************ 00:05:11.097 START TEST skip_rpc 00:05:11.097 ************************************ 00:05:11.097 05:05:24 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:05:11.097 * Looking for test storage... 
00:05:11.098 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@345 -- # : 1 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:11.098 05:05:24 skip_rpc -- scripts/common.sh@368 -- # return 0 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.098 --rc genhtml_branch_coverage=1 00:05:11.098 --rc genhtml_function_coverage=1 00:05:11.098 --rc genhtml_legend=1 00:05:11.098 --rc geninfo_all_blocks=1 00:05:11.098 --rc geninfo_unexecuted_blocks=1 00:05:11.098 00:05:11.098 ' 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.098 --rc genhtml_branch_coverage=1 00:05:11.098 --rc genhtml_function_coverage=1 00:05:11.098 --rc genhtml_legend=1 00:05:11.098 --rc geninfo_all_blocks=1 00:05:11.098 --rc geninfo_unexecuted_blocks=1 00:05:11.098 00:05:11.098 ' 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:05:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.098 --rc genhtml_branch_coverage=1 00:05:11.098 --rc genhtml_function_coverage=1 00:05:11.098 --rc genhtml_legend=1 00:05:11.098 --rc geninfo_all_blocks=1 00:05:11.098 --rc geninfo_unexecuted_blocks=1 00:05:11.098 00:05:11.098 ' 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:11.098 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:11.098 --rc genhtml_branch_coverage=1 00:05:11.098 --rc genhtml_function_coverage=1 00:05:11.098 --rc genhtml_legend=1 00:05:11.098 --rc geninfo_all_blocks=1 00:05:11.098 --rc geninfo_unexecuted_blocks=1 00:05:11.098 00:05:11.098 ' 00:05:11.098 05:05:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:11.098 05:05:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:11.098 05:05:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:11.098 05:05:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:11.098 ************************************ 00:05:11.098 START TEST skip_rpc 00:05:11.098 ************************************ 00:05:11.098 05:05:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:05:11.098 05:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=104711 00:05:11.098 05:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:05:11.098 05:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:11.098 05:05:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:05:11.358 [2024-12-15 05:05:24.827882] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:11.358 [2024-12-15 05:05:24.827918] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid104711 ] 00:05:11.358 [2024-12-15 05:05:24.897440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:11.358 [2024-12-15 05:05:24.919420] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.639 05:05:29 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 104711 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 104711 ']' 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 104711 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 104711 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 104711' 00:05:16.639 killing process with pid 104711 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 104711 00:05:16.639 05:05:29 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 104711 00:05:16.639 00:05:16.639 real 0m5.363s 00:05:16.639 user 0m5.115s 00:05:16.639 sys 0m0.287s 00:05:16.639 05:05:30 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.639 05:05:30 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.639 ************************************ 00:05:16.639 END TEST skip_rpc 00:05:16.639 ************************************ 00:05:16.639 05:05:30 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:05:16.639 05:05:30 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.639 05:05:30 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.639 05:05:30 skip_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:05:16.639 ************************************ 00:05:16.639 START TEST skip_rpc_with_json 00:05:16.639 ************************************ 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=105633 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 105633 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 105633 ']' 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.639 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.639 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.639 [2024-12-15 05:05:30.258624] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:16.639 [2024-12-15 05:05:30.258663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid105633 ] 00:05:16.899 [2024-12-15 05:05:30.333910] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:16.899 [2024-12-15 05:05:30.356776] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.899 [2024-12-15 05:05:30.562827] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:05:16.899 request: 00:05:16.899 { 00:05:16.899 "trtype": "tcp", 00:05:16.899 "method": "nvmf_get_transports", 00:05:16.899 "req_id": 1 00:05:16.899 } 00:05:16.899 Got JSON-RPC error response 00:05:16.899 response: 00:05:16.899 { 00:05:16.899 "code": -19, 00:05:16.899 "message": "No such device" 00:05:16.899 } 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:16.899 [2024-12-15 05:05:30.574940] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:16.899 05:05:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:16.899 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:17.160 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.160 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:17.160 { 00:05:17.160 "subsystems": [ 00:05:17.160 { 00:05:17.160 "subsystem": "fsdev", 00:05:17.160 "config": [ 00:05:17.160 { 00:05:17.160 "method": "fsdev_set_opts", 00:05:17.160 "params": { 00:05:17.160 "fsdev_io_pool_size": 65535, 00:05:17.160 "fsdev_io_cache_size": 256 00:05:17.160 } 00:05:17.160 } 00:05:17.160 ] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "vfio_user_target", 00:05:17.160 "config": null 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "keyring", 00:05:17.160 "config": [] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "iobuf", 00:05:17.160 "config": [ 00:05:17.160 { 00:05:17.160 "method": "iobuf_set_options", 00:05:17.160 "params": { 00:05:17.160 "small_pool_count": 8192, 00:05:17.160 "large_pool_count": 1024, 00:05:17.160 "small_bufsize": 8192, 00:05:17.160 "large_bufsize": 135168, 00:05:17.160 "enable_numa": false 00:05:17.160 } 00:05:17.160 } 00:05:17.160 ] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "sock", 00:05:17.160 "config": [ 00:05:17.160 { 00:05:17.160 "method": "sock_set_default_impl", 00:05:17.160 "params": { 00:05:17.160 "impl_name": "posix" 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "sock_impl_set_options", 00:05:17.160 "params": { 00:05:17.160 "impl_name": "ssl", 00:05:17.160 "recv_buf_size": 4096, 00:05:17.160 "send_buf_size": 4096, 
00:05:17.160 "enable_recv_pipe": true, 00:05:17.160 "enable_quickack": false, 00:05:17.160 "enable_placement_id": 0, 00:05:17.160 "enable_zerocopy_send_server": true, 00:05:17.160 "enable_zerocopy_send_client": false, 00:05:17.160 "zerocopy_threshold": 0, 00:05:17.160 "tls_version": 0, 00:05:17.160 "enable_ktls": false 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "sock_impl_set_options", 00:05:17.160 "params": { 00:05:17.160 "impl_name": "posix", 00:05:17.160 "recv_buf_size": 2097152, 00:05:17.160 "send_buf_size": 2097152, 00:05:17.160 "enable_recv_pipe": true, 00:05:17.160 "enable_quickack": false, 00:05:17.160 "enable_placement_id": 0, 00:05:17.160 "enable_zerocopy_send_server": true, 00:05:17.160 "enable_zerocopy_send_client": false, 00:05:17.160 "zerocopy_threshold": 0, 00:05:17.160 "tls_version": 0, 00:05:17.160 "enable_ktls": false 00:05:17.160 } 00:05:17.160 } 00:05:17.160 ] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "vmd", 00:05:17.160 "config": [] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "accel", 00:05:17.160 "config": [ 00:05:17.160 { 00:05:17.160 "method": "accel_set_options", 00:05:17.160 "params": { 00:05:17.160 "small_cache_size": 128, 00:05:17.160 "large_cache_size": 16, 00:05:17.160 "task_count": 2048, 00:05:17.160 "sequence_count": 2048, 00:05:17.160 "buf_count": 2048 00:05:17.160 } 00:05:17.160 } 00:05:17.160 ] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "bdev", 00:05:17.160 "config": [ 00:05:17.160 { 00:05:17.160 "method": "bdev_set_options", 00:05:17.160 "params": { 00:05:17.160 "bdev_io_pool_size": 65535, 00:05:17.160 "bdev_io_cache_size": 256, 00:05:17.160 "bdev_auto_examine": true, 00:05:17.160 "iobuf_small_cache_size": 128, 00:05:17.160 "iobuf_large_cache_size": 16 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "bdev_raid_set_options", 00:05:17.160 "params": { 00:05:17.160 "process_window_size_kb": 1024, 00:05:17.160 "process_max_bandwidth_mb_sec": 0 
00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "bdev_iscsi_set_options", 00:05:17.160 "params": { 00:05:17.160 "timeout_sec": 30 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "bdev_nvme_set_options", 00:05:17.160 "params": { 00:05:17.160 "action_on_timeout": "none", 00:05:17.160 "timeout_us": 0, 00:05:17.160 "timeout_admin_us": 0, 00:05:17.160 "keep_alive_timeout_ms": 10000, 00:05:17.160 "arbitration_burst": 0, 00:05:17.160 "low_priority_weight": 0, 00:05:17.160 "medium_priority_weight": 0, 00:05:17.160 "high_priority_weight": 0, 00:05:17.160 "nvme_adminq_poll_period_us": 10000, 00:05:17.160 "nvme_ioq_poll_period_us": 0, 00:05:17.160 "io_queue_requests": 0, 00:05:17.160 "delay_cmd_submit": true, 00:05:17.160 "transport_retry_count": 4, 00:05:17.160 "bdev_retry_count": 3, 00:05:17.160 "transport_ack_timeout": 0, 00:05:17.160 "ctrlr_loss_timeout_sec": 0, 00:05:17.160 "reconnect_delay_sec": 0, 00:05:17.160 "fast_io_fail_timeout_sec": 0, 00:05:17.160 "disable_auto_failback": false, 00:05:17.160 "generate_uuids": false, 00:05:17.160 "transport_tos": 0, 00:05:17.160 "nvme_error_stat": false, 00:05:17.160 "rdma_srq_size": 0, 00:05:17.160 "io_path_stat": false, 00:05:17.160 "allow_accel_sequence": false, 00:05:17.160 "rdma_max_cq_size": 0, 00:05:17.160 "rdma_cm_event_timeout_ms": 0, 00:05:17.160 "dhchap_digests": [ 00:05:17.160 "sha256", 00:05:17.160 "sha384", 00:05:17.160 "sha512" 00:05:17.160 ], 00:05:17.160 "dhchap_dhgroups": [ 00:05:17.160 "null", 00:05:17.160 "ffdhe2048", 00:05:17.160 "ffdhe3072", 00:05:17.160 "ffdhe4096", 00:05:17.160 "ffdhe6144", 00:05:17.160 "ffdhe8192" 00:05:17.160 ], 00:05:17.160 "rdma_umr_per_io": false 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "bdev_nvme_set_hotplug", 00:05:17.160 "params": { 00:05:17.160 "period_us": 100000, 00:05:17.160 "enable": false 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "bdev_wait_for_examine" 00:05:17.160 } 00:05:17.160 
] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "scsi", 00:05:17.160 "config": null 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "scheduler", 00:05:17.160 "config": [ 00:05:17.160 { 00:05:17.160 "method": "framework_set_scheduler", 00:05:17.160 "params": { 00:05:17.160 "name": "static" 00:05:17.160 } 00:05:17.160 } 00:05:17.160 ] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "vhost_scsi", 00:05:17.160 "config": [] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "vhost_blk", 00:05:17.160 "config": [] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "ublk", 00:05:17.160 "config": [] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "nbd", 00:05:17.160 "config": [] 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "subsystem": "nvmf", 00:05:17.160 "config": [ 00:05:17.160 { 00:05:17.160 "method": "nvmf_set_config", 00:05:17.160 "params": { 00:05:17.160 "discovery_filter": "match_any", 00:05:17.160 "admin_cmd_passthru": { 00:05:17.160 "identify_ctrlr": false 00:05:17.160 }, 00:05:17.160 "dhchap_digests": [ 00:05:17.160 "sha256", 00:05:17.160 "sha384", 00:05:17.160 "sha512" 00:05:17.160 ], 00:05:17.160 "dhchap_dhgroups": [ 00:05:17.160 "null", 00:05:17.160 "ffdhe2048", 00:05:17.160 "ffdhe3072", 00:05:17.160 "ffdhe4096", 00:05:17.160 "ffdhe6144", 00:05:17.160 "ffdhe8192" 00:05:17.160 ] 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "nvmf_set_max_subsystems", 00:05:17.160 "params": { 00:05:17.160 "max_subsystems": 1024 00:05:17.160 } 00:05:17.160 }, 00:05:17.160 { 00:05:17.160 "method": "nvmf_set_crdt", 00:05:17.160 "params": { 00:05:17.160 "crdt1": 0, 00:05:17.160 "crdt2": 0, 00:05:17.160 "crdt3": 0 00:05:17.160 } 00:05:17.161 }, 00:05:17.161 { 00:05:17.161 "method": "nvmf_create_transport", 00:05:17.161 "params": { 00:05:17.161 "trtype": "TCP", 00:05:17.161 "max_queue_depth": 128, 00:05:17.161 "max_io_qpairs_per_ctrlr": 127, 00:05:17.161 "in_capsule_data_size": 4096, 00:05:17.161 "max_io_size": 
131072, 00:05:17.161 "io_unit_size": 131072, 00:05:17.161 "max_aq_depth": 128, 00:05:17.161 "num_shared_buffers": 511, 00:05:17.161 "buf_cache_size": 4294967295, 00:05:17.161 "dif_insert_or_strip": false, 00:05:17.161 "zcopy": false, 00:05:17.161 "c2h_success": true, 00:05:17.161 "sock_priority": 0, 00:05:17.161 "abort_timeout_sec": 1, 00:05:17.161 "ack_timeout": 0, 00:05:17.161 "data_wr_pool_size": 0 00:05:17.161 } 00:05:17.161 } 00:05:17.161 ] 00:05:17.161 }, 00:05:17.161 { 00:05:17.161 "subsystem": "iscsi", 00:05:17.161 "config": [ 00:05:17.161 { 00:05:17.161 "method": "iscsi_set_options", 00:05:17.161 "params": { 00:05:17.161 "node_base": "iqn.2016-06.io.spdk", 00:05:17.161 "max_sessions": 128, 00:05:17.161 "max_connections_per_session": 2, 00:05:17.161 "max_queue_depth": 64, 00:05:17.161 "default_time2wait": 2, 00:05:17.161 "default_time2retain": 20, 00:05:17.161 "first_burst_length": 8192, 00:05:17.161 "immediate_data": true, 00:05:17.161 "allow_duplicated_isid": false, 00:05:17.161 "error_recovery_level": 0, 00:05:17.161 "nop_timeout": 60, 00:05:17.161 "nop_in_interval": 30, 00:05:17.161 "disable_chap": false, 00:05:17.161 "require_chap": false, 00:05:17.161 "mutual_chap": false, 00:05:17.161 "chap_group": 0, 00:05:17.161 "max_large_datain_per_connection": 64, 00:05:17.161 "max_r2t_per_connection": 4, 00:05:17.161 "pdu_pool_size": 36864, 00:05:17.161 "immediate_data_pool_size": 16384, 00:05:17.161 "data_out_pool_size": 2048 00:05:17.161 } 00:05:17.161 } 00:05:17.161 ] 00:05:17.161 } 00:05:17.161 ] 00:05:17.161 } 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 105633 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105633 ']' 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105633 00:05:17.161 05:05:30 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105633 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105633' 00:05:17.161 killing process with pid 105633 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105633 00:05:17.161 05:05:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105633 00:05:17.421 05:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=105757 00:05:17.421 05:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:05:17.421 05:05:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:22.708 05:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 105757 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 105757 ']' 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 105757 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 105757 00:05:22.709 05:05:36 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 105757' 00:05:22.709 killing process with pid 105757 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 105757 00:05:22.709 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 105757 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:05:22.969 00:05:22.969 real 0m6.242s 00:05:22.969 user 0m5.942s 00:05:22.969 sys 0m0.594s 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:05:22.969 ************************************ 00:05:22.969 END TEST skip_rpc_with_json 00:05:22.969 ************************************ 00:05:22.969 05:05:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:05:22.969 05:05:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.969 05:05:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.969 05:05:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:22.969 ************************************ 00:05:22.969 START TEST skip_rpc_with_delay 00:05:22.969 ************************************ 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- 
rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:05:22.969 [2024-12-15 05:05:36.575056] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:22.969 00:05:22.969 real 0m0.069s 00:05:22.969 user 0m0.039s 00:05:22.969 sys 0m0.029s 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:22.969 05:05:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:05:22.969 ************************************ 00:05:22.969 END TEST skip_rpc_with_delay 00:05:22.969 ************************************ 00:05:22.969 05:05:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:05:22.969 05:05:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:05:22.969 05:05:36 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:05:22.969 05:05:36 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:22.969 05:05:36 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:22.969 05:05:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.229 ************************************ 00:05:23.229 START TEST exit_on_failed_rpc_init 00:05:23.229 ************************************ 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=106792 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 106792 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 
00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 106792 ']' 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.229 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.229 05:05:36 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:23.229 [2024-12-15 05:05:36.712677] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:23.229 [2024-12-15 05:05:36.712717] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106792 ] 00:05:23.229 [2024-12-15 05:05:36.787505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.229 [2024-12-15 05:05:36.810490] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.490 
05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:05:23.490 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:05:23.490 [2024-12-15 05:05:37.069089] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:23.490 [2024-12-15 05:05:37.069133] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid106820 ] 00:05:23.490 [2024-12-15 05:05:37.143129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.490 [2024-12-15 05:05:37.165272] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.490 [2024-12-15 05:05:37.165324] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:05:23.490 [2024-12-15 05:05:37.165333] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:05:23.490 [2024-12-15 05:05:37.165339] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 106792 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 106792 ']' 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 106792 00:05:23.750 05:05:37 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 106792 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 106792' 00:05:23.750 killing process with pid 106792 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 106792 00:05:23.750 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 106792 00:05:24.010 00:05:24.010 real 0m0.887s 00:05:24.010 user 0m0.919s 00:05:24.010 sys 0m0.388s 00:05:24.010 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.010 05:05:37 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:05:24.010 ************************************ 00:05:24.010 END TEST exit_on_failed_rpc_init 00:05:24.010 ************************************ 00:05:24.010 05:05:37 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:05:24.010 00:05:24.010 real 0m13.017s 00:05:24.010 user 0m12.230s 00:05:24.010 sys 0m1.568s 00:05:24.010 05:05:37 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.010 05:05:37 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.010 ************************************ 00:05:24.010 END TEST skip_rpc 00:05:24.010 ************************************ 00:05:24.010 05:05:37 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.010 05:05:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.010 05:05:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.010 05:05:37 -- common/autotest_common.sh@10 -- # set +x 00:05:24.010 ************************************ 00:05:24.010 START TEST rpc_client 00:05:24.010 ************************************ 00:05:24.010 05:05:37 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:05:24.271 * Looking for test storage... 00:05:24.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:05:24.271 05:05:37 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.271 05:05:37 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.271 05:05:37 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.271 05:05:37 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@345 -- # : 1 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@353 -- # local d=1 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@355 -- # echo 1 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@353 -- # local d=2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@355 -- # echo 2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.271 05:05:37 rpc_client -- scripts/common.sh@368 -- # return 0 00:05:24.271 05:05:37 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.271 05:05:37 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.271 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.271 --rc genhtml_branch_coverage=1 00:05:24.271 --rc genhtml_function_coverage=1 00:05:24.271 --rc genhtml_legend=1 00:05:24.271 --rc geninfo_all_blocks=1 00:05:24.271 --rc geninfo_unexecuted_blocks=1 00:05:24.272 00:05:24.272 ' 00:05:24.272 05:05:37 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.272 --rc genhtml_branch_coverage=1 
00:05:24.272 --rc genhtml_function_coverage=1 00:05:24.272 --rc genhtml_legend=1 00:05:24.272 --rc geninfo_all_blocks=1 00:05:24.272 --rc geninfo_unexecuted_blocks=1 00:05:24.272 00:05:24.272 ' 00:05:24.272 05:05:37 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.272 --rc genhtml_branch_coverage=1 00:05:24.272 --rc genhtml_function_coverage=1 00:05:24.272 --rc genhtml_legend=1 00:05:24.272 --rc geninfo_all_blocks=1 00:05:24.272 --rc geninfo_unexecuted_blocks=1 00:05:24.272 00:05:24.272 ' 00:05:24.272 05:05:37 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.272 --rc genhtml_branch_coverage=1 00:05:24.272 --rc genhtml_function_coverage=1 00:05:24.272 --rc genhtml_legend=1 00:05:24.272 --rc geninfo_all_blocks=1 00:05:24.272 --rc geninfo_unexecuted_blocks=1 00:05:24.272 00:05:24.272 ' 00:05:24.272 05:05:37 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:05:24.272 OK 00:05:24.272 05:05:37 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:05:24.272 00:05:24.272 real 0m0.199s 00:05:24.272 user 0m0.116s 00:05:24.272 sys 0m0.097s 00:05:24.272 05:05:37 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:24.272 05:05:37 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:05:24.272 ************************************ 00:05:24.272 END TEST rpc_client 00:05:24.272 ************************************ 00:05:24.272 05:05:37 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.272 05:05:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:24.272 05:05:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:24.272 05:05:37 -- common/autotest_common.sh@10 
-- # set +x 00:05:24.272 ************************************ 00:05:24.272 START TEST json_config 00:05:24.272 ************************************ 00:05:24.272 05:05:37 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:05:24.533 05:05:37 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:24.533 05:05:37 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:05:24.533 05:05:37 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:24.533 05:05:38 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:24.533 05:05:38 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:24.533 05:05:38 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:05:24.533 05:05:38 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:05:24.533 05:05:38 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:05:24.533 05:05:38 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:05:24.533 05:05:38 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:05:24.533 05:05:38 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:24.533 05:05:38 json_config -- scripts/common.sh@344 -- # case "$op" in 00:05:24.533 05:05:38 json_config -- scripts/common.sh@345 -- # : 1 00:05:24.533 05:05:38 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:24.533 05:05:38 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:24.533 05:05:38 json_config -- scripts/common.sh@365 -- # decimal 1 00:05:24.533 05:05:38 json_config -- scripts/common.sh@353 -- # local d=1 00:05:24.533 05:05:38 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:24.533 05:05:38 json_config -- scripts/common.sh@355 -- # echo 1 00:05:24.533 05:05:38 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:05:24.533 05:05:38 json_config -- scripts/common.sh@366 -- # decimal 2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@353 -- # local d=2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:24.533 05:05:38 json_config -- scripts/common.sh@355 -- # echo 2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:05:24.533 05:05:38 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:24.533 05:05:38 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:24.533 05:05:38 json_config -- scripts/common.sh@368 -- # return 0 00:05:24.533 05:05:38 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:24.533 05:05:38 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.533 --rc genhtml_branch_coverage=1 00:05:24.533 --rc genhtml_function_coverage=1 00:05:24.533 --rc genhtml_legend=1 00:05:24.533 --rc geninfo_all_blocks=1 00:05:24.533 --rc geninfo_unexecuted_blocks=1 00:05:24.533 00:05:24.533 ' 00:05:24.533 05:05:38 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.533 --rc genhtml_branch_coverage=1 00:05:24.533 --rc genhtml_function_coverage=1 00:05:24.533 --rc genhtml_legend=1 00:05:24.533 --rc geninfo_all_blocks=1 00:05:24.533 --rc geninfo_unexecuted_blocks=1 00:05:24.533 00:05:24.533 ' 00:05:24.533 05:05:38 json_config -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.533 --rc genhtml_branch_coverage=1 00:05:24.533 --rc genhtml_function_coverage=1 00:05:24.533 --rc genhtml_legend=1 00:05:24.533 --rc geninfo_all_blocks=1 00:05:24.533 --rc geninfo_unexecuted_blocks=1 00:05:24.533 00:05:24.533 ' 00:05:24.533 05:05:38 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:24.533 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:24.533 --rc genhtml_branch_coverage=1 00:05:24.533 --rc genhtml_function_coverage=1 00:05:24.533 --rc genhtml_legend=1 00:05:24.533 --rc geninfo_all_blocks=1 00:05:24.533 --rc geninfo_unexecuted_blocks=1 00:05:24.533 00:05:24.533 ' 00:05:24.533 05:05:38 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@7 -- # uname -s 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@18 -- 
# NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:24.533 05:05:38 json_config -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:24.533 05:05:38 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:05:24.533 05:05:38 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:24.533 05:05:38 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:24.533 05:05:38 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:24.533 05:05:38 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.533 05:05:38 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.534 05:05:38 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.534 05:05:38 json_config -- paths/export.sh@5 -- # export PATH 00:05:24.534 05:05:38 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@51 -- # : 0 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:24.534 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:24.534 05:05:38 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:05:24.534 INFO: JSON configuration test init 00:05:24.534 05:05:38 
json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.534 05:05:38 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:05:24.534 05:05:38 json_config -- json_config/common.sh@9 -- # local app=target 00:05:24.534 05:05:38 json_config -- json_config/common.sh@10 -- # shift 00:05:24.534 05:05:38 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:24.534 05:05:38 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:24.534 05:05:38 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:24.534 05:05:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.534 05:05:38 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:24.534 05:05:38 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=107166 00:05:24.534 05:05:38 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:24.534 Waiting for target to run... 
00:05:24.534 05:05:38 json_config -- json_config/common.sh@25 -- # waitforlisten 107166 /var/tmp/spdk_tgt.sock 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@835 -- # '[' -z 107166 ']' 00:05:24.534 05:05:38 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:24.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.534 05:05:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:24.534 [2024-12-15 05:05:38.182436] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:24.534 [2024-12-15 05:05:38.182482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid107166 ] 00:05:25.104 [2024-12-15 05:05:38.634548] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.104 [2024-12-15 05:05:38.656352] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:25.364 05:05:39 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:25.364 05:05:39 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:25.364 05:05:39 json_config -- json_config/common.sh@26 -- # echo '' 00:05:25.364 00:05:25.364 05:05:39 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:05:25.364 05:05:39 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:05:25.364 05:05:39 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:25.364 05:05:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.364 05:05:39 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:05:25.364 05:05:39 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:05:25.364 05:05:39 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:25.364 05:05:39 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:25.624 05:05:39 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:05:25.624 05:05:39 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:05:25.624 05:05:39 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:28.917 05:05:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.917 05:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:05:28.917 05:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@51 -- # local get_types 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@54 -- # sort 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:05:28.917 05:05:42 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:05:28.917 05:05:42 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:05:28.918 05:05:42 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:28.918 05:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@62 -- # return 0 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:05:28.918 05:05:42 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:28.918 05:05:42 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.918 05:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:28.918 MallocForNvmf0 00:05:28.918 05:05:42 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:05:28.918 05:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:29.177 MallocForNvmf1 00:05:29.177 05:05:42 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.177 05:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:29.437 [2024-12-15 05:05:42.932911] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:29.437 05:05:42 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.437 05:05:42 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:29.696 05:05:43 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.696 05:05:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:29.696 05:05:43 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.696 05:05:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:29.956 05:05:43 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:29.956 05:05:43 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:30.215 [2024-12-15 05:05:43.727277] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:30.215 05:05:43 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:05:30.215 05:05:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.215 05:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.215 05:05:43 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:05:30.215 05:05:43 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.215 05:05:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.215 05:05:43 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:05:30.215 05:05:43 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.215 05:05:43 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:30.474 MallocBdevForConfigChangeCheck 00:05:30.475 05:05:44 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:05:30.475 05:05:44 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:30.475 05:05:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:30.475 05:05:44 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:05:30.475 05:05:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:30.734 05:05:44 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:05:30.734 INFO: shutting down applications... 00:05:30.734 05:05:44 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:05:30.734 05:05:44 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:05:30.734 05:05:44 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:05:30.734 05:05:44 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:32.641 Calling clear_iscsi_subsystem 00:05:32.641 Calling clear_nvmf_subsystem 00:05:32.641 Calling clear_nbd_subsystem 00:05:32.641 Calling clear_ublk_subsystem 00:05:32.641 Calling clear_vhost_blk_subsystem 00:05:32.641 Calling clear_vhost_scsi_subsystem 00:05:32.641 Calling clear_bdev_subsystem 00:05:32.641 05:05:45 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:32.641 05:05:45 json_config -- json_config/json_config.sh@350 -- # count=100 00:05:32.641 05:05:45 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:05:32.641 05:05:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:32.641 05:05:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:32.641 05:05:45 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:32.641 05:05:46 json_config -- json_config/json_config.sh@352 -- # break 00:05:32.641 05:05:46 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:05:32.641 05:05:46 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:05:32.641 05:05:46 json_config -- json_config/common.sh@31 -- # local app=target 00:05:32.641 05:05:46 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:32.641 05:05:46 json_config -- json_config/common.sh@35 -- # [[ -n 107166 ]] 00:05:32.901 05:05:46 json_config -- json_config/common.sh@38 -- # kill -SIGINT 107166 00:05:32.901 05:05:46 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:32.901 05:05:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:32.901 05:05:46 json_config -- json_config/common.sh@41 -- # kill -0 107166 00:05:32.901 05:05:46 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:33.161 05:05:46 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:33.161 05:05:46 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:33.161 05:05:46 json_config -- json_config/common.sh@41 -- # kill -0 107166 00:05:33.161 05:05:46 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:33.161 05:05:46 json_config -- json_config/common.sh@43 -- # break 00:05:33.161 05:05:46 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:33.161 05:05:46 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:33.161 SPDK target shutdown done 00:05:33.161 05:05:46 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:05:33.161 INFO: relaunching applications... 
00:05:33.161 05:05:46 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.161 05:05:46 json_config -- json_config/common.sh@9 -- # local app=target 00:05:33.161 05:05:46 json_config -- json_config/common.sh@10 -- # shift 00:05:33.161 05:05:46 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:33.161 05:05:46 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:33.161 05:05:46 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:33.161 05:05:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.161 05:05:46 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:33.161 05:05:46 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=108641 00:05:33.161 05:05:46 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:33.161 Waiting for target to run... 00:05:33.161 05:05:46 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:33.161 05:05:46 json_config -- json_config/common.sh@25 -- # waitforlisten 108641 /var/tmp/spdk_tgt.sock 00:05:33.161 05:05:46 json_config -- common/autotest_common.sh@835 -- # '[' -z 108641 ']' 00:05:33.161 05:05:46 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:33.161 05:05:46 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:33.161 05:05:46 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:33.162 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:05:33.162 05:05:46 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:33.162 05:05:46 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:33.421 [2024-12-15 05:05:46.890596] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:33.421 [2024-12-15 05:05:46.890660] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid108641 ] 00:05:33.681 [2024-12-15 05:05:47.357339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.940 [2024-12-15 05:05:47.376724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.236 [2024-12-15 05:05:50.380932] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:37.236 [2024-12-15 05:05:50.413206] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:37.495 05:05:51 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:37.495 05:05:51 json_config -- common/autotest_common.sh@868 -- # return 0 00:05:37.495 05:05:51 json_config -- json_config/common.sh@26 -- # echo '' 00:05:37.495 00:05:37.495 05:05:51 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:05:37.495 05:05:51 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:37.495 INFO: Checking if target configuration is the same... 
00:05:37.495 05:05:51 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.495 05:05:51 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:05:37.495 05:05:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:37.495 + '[' 2 -ne 2 ']' 00:05:37.495 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:37.495 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:37.495 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:37.495 +++ basename /dev/fd/62 00:05:37.495 ++ mktemp /tmp/62.XXX 00:05:37.495 + tmp_file_1=/tmp/62.Ww2 00:05:37.495 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:37.495 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:37.495 + tmp_file_2=/tmp/spdk_tgt_config.json.hjR 00:05:37.495 + ret=0 00:05:37.495 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.064 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.064 + diff -u /tmp/62.Ww2 /tmp/spdk_tgt_config.json.hjR 00:05:38.064 + echo 'INFO: JSON config files are the same' 00:05:38.064 INFO: JSON config files are the same 00:05:38.064 + rm /tmp/62.Ww2 /tmp/spdk_tgt_config.json.hjR 00:05:38.064 + exit 0 00:05:38.064 05:05:51 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:05:38.064 05:05:51 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:38.064 INFO: changing configuration and checking if this can be detected... 
00:05:38.064 05:05:51 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.064 05:05:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:38.064 05:05:51 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.064 05:05:51 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:05:38.064 05:05:51 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:38.064 + '[' 2 -ne 2 ']' 00:05:38.064 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:38.064 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:05:38.064 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:38.064 +++ basename /dev/fd/62 00:05:38.064 ++ mktemp /tmp/62.XXX 00:05:38.064 + tmp_file_1=/tmp/62.way 00:05:38.064 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:38.064 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:38.064 + tmp_file_2=/tmp/spdk_tgt_config.json.nZ4 00:05:38.064 + ret=0 00:05:38.064 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.633 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:38.633 + diff -u /tmp/62.way /tmp/spdk_tgt_config.json.nZ4 00:05:38.633 + ret=1 00:05:38.633 + echo '=== Start of file: /tmp/62.way ===' 00:05:38.633 + cat /tmp/62.way 00:05:38.633 + echo '=== End of file: /tmp/62.way ===' 00:05:38.633 + echo '' 00:05:38.633 + echo '=== Start of file: /tmp/spdk_tgt_config.json.nZ4 ===' 00:05:38.633 + cat /tmp/spdk_tgt_config.json.nZ4 00:05:38.633 + echo '=== End of file: /tmp/spdk_tgt_config.json.nZ4 ===' 00:05:38.633 + echo '' 00:05:38.633 + rm /tmp/62.way /tmp/spdk_tgt_config.json.nZ4 00:05:38.633 + exit 1 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:05:38.633 INFO: configuration change detected. 
00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@324 -- # [[ -n 108641 ]] 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@200 -- # uname -s 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:38.633 05:05:52 json_config -- json_config/json_config.sh@330 -- # killprocess 108641 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@954 -- # '[' -z 108641 ']' 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@958 -- # kill -0 108641 
00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@959 -- # uname 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 108641 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 108641' 00:05:38.633 killing process with pid 108641 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@973 -- # kill 108641 00:05:38.633 05:05:52 json_config -- common/autotest_common.sh@978 -- # wait 108641 00:05:40.542 05:05:53 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:40.542 05:05:53 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:05:40.542 05:05:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:40.542 05:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.542 05:05:53 json_config -- json_config/json_config.sh@335 -- # return 0 00:05:40.542 05:05:53 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:05:40.542 INFO: Success 00:05:40.542 00:05:40.542 real 0m15.842s 00:05:40.542 user 0m16.955s 00:05:40.542 sys 0m2.056s 00:05:40.542 05:05:53 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.542 05:05:53 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:40.542 ************************************ 00:05:40.542 END TEST json_config 00:05:40.542 ************************************ 00:05:40.542 05:05:53 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:40.542 05:05:53 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:40.542 05:05:53 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:40.542 05:05:53 -- common/autotest_common.sh@10 -- # set +x 00:05:40.542 ************************************ 00:05:40.542 START TEST json_config_extra_key 00:05:40.542 ************************************ 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:40.542 05:05:53 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.542 --rc genhtml_branch_coverage=1 00:05:40.542 --rc genhtml_function_coverage=1 00:05:40.542 --rc genhtml_legend=1 00:05:40.542 --rc geninfo_all_blocks=1 
00:05:40.542 --rc geninfo_unexecuted_blocks=1 00:05:40.542 00:05:40.542 ' 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.542 --rc genhtml_branch_coverage=1 00:05:40.542 --rc genhtml_function_coverage=1 00:05:40.542 --rc genhtml_legend=1 00:05:40.542 --rc geninfo_all_blocks=1 00:05:40.542 --rc geninfo_unexecuted_blocks=1 00:05:40.542 00:05:40.542 ' 00:05:40.542 05:05:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:40.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.542 --rc genhtml_branch_coverage=1 00:05:40.543 --rc genhtml_function_coverage=1 00:05:40.543 --rc genhtml_legend=1 00:05:40.543 --rc geninfo_all_blocks=1 00:05:40.543 --rc geninfo_unexecuted_blocks=1 00:05:40.543 00:05:40.543 ' 00:05:40.543 05:05:53 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:40.543 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:40.543 --rc genhtml_branch_coverage=1 00:05:40.543 --rc genhtml_function_coverage=1 00:05:40.543 --rc genhtml_legend=1 00:05:40.543 --rc geninfo_all_blocks=1 00:05:40.543 --rc geninfo_unexecuted_blocks=1 00:05:40.543 00:05:40.543 ' 00:05:40.543 05:05:53 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:40.543 05:05:53 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:40.543 05:05:54 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:05:40.543 05:05:54 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:40.543 05:05:54 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:40.543 05:05:54 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:40.543 05:05:54 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.543 05:05:54 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.543 05:05:54 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.543 05:05:54 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:40.543 05:05:54 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:05:40.543 05:05:54 json_config_extra_key -- 
nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:05:40.543 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:05:40.543 05:05:54 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # 
configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:40.543 INFO: launching applications... 00:05:40.543 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=109969 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:40.543 Waiting for target to run... 
00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 109969 /var/tmp/spdk_tgt.sock 00:05:40.543 05:05:54 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 109969 ']' 00:05:40.543 05:05:54 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:40.543 05:05:54 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:40.543 05:05:54 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:40.543 05:05:54 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:40.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:40.543 05:05:54 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:40.543 05:05:54 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:40.543 [2024-12-15 05:05:54.076423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:40.543 [2024-12-15 05:05:54.076470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid109969 ] 00:05:41.112 [2024-12-15 05:05:54.530399] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.112 [2024-12-15 05:05:54.552469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.372 05:05:54 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:41.372 05:05:54 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:41.372 00:05:41.372 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:41.372 INFO: shutting down applications... 00:05:41.372 05:05:54 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 109969 ]] 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 109969 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109969 00:05:41.372 05:05:54 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:41.941 05:05:55 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:41.941 05:05:55 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:05:41.941 05:05:55 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 109969 00:05:41.941 05:05:55 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:41.941 05:05:55 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:41.941 05:05:55 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:41.941 05:05:55 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:41.941 SPDK target shutdown done 00:05:41.941 05:05:55 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:41.941 Success 00:05:41.941 00:05:41.941 real 0m1.573s 00:05:41.941 user 0m1.168s 00:05:41.941 sys 0m0.579s 00:05:41.941 05:05:55 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.941 05:05:55 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:41.941 ************************************ 00:05:41.941 END TEST json_config_extra_key 00:05:41.941 ************************************ 00:05:41.941 05:05:55 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.941 05:05:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:41.942 05:05:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.942 05:05:55 -- common/autotest_common.sh@10 -- # set +x 00:05:41.942 ************************************ 00:05:41.942 START TEST alias_rpc 00:05:41.942 ************************************ 00:05:41.942 05:05:55 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:41.942 * Looking for test storage... 
00:05:41.942 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:41.942 05:05:55 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:41.942 05:05:55 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:05:41.942 05:05:55 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.201 05:05:55 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@345 -- # : 1 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.201 05:05:55 alias_rpc -- scripts/common.sh@368 -- # return 0 00:05:42.201 05:05:55 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.201 05:05:55 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.201 --rc genhtml_branch_coverage=1 00:05:42.201 --rc genhtml_function_coverage=1 00:05:42.201 --rc genhtml_legend=1 00:05:42.201 --rc geninfo_all_blocks=1 00:05:42.201 --rc geninfo_unexecuted_blocks=1 00:05:42.201 00:05:42.201 ' 00:05:42.201 05:05:55 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.201 --rc genhtml_branch_coverage=1 00:05:42.201 --rc genhtml_function_coverage=1 00:05:42.201 --rc genhtml_legend=1 00:05:42.201 --rc geninfo_all_blocks=1 00:05:42.202 --rc geninfo_unexecuted_blocks=1 00:05:42.202 00:05:42.202 ' 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@1725 -- 
# export 'LCOV=lcov 00:05:42.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.202 --rc genhtml_branch_coverage=1 00:05:42.202 --rc genhtml_function_coverage=1 00:05:42.202 --rc genhtml_legend=1 00:05:42.202 --rc geninfo_all_blocks=1 00:05:42.202 --rc geninfo_unexecuted_blocks=1 00:05:42.202 00:05:42.202 ' 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.202 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.202 --rc genhtml_branch_coverage=1 00:05:42.202 --rc genhtml_function_coverage=1 00:05:42.202 --rc genhtml_legend=1 00:05:42.202 --rc geninfo_all_blocks=1 00:05:42.202 --rc geninfo_unexecuted_blocks=1 00:05:42.202 00:05:42.202 ' 00:05:42.202 05:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:42.202 05:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=110385 00:05:42.202 05:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 110385 00:05:42.202 05:05:55 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 110385 ']' 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:42.202 05:05:55 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.202 [2024-12-15 05:05:55.717878] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:42.202 [2024-12-15 05:05:55.717925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110385 ] 00:05:42.202 [2024-12-15 05:05:55.794213] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.202 [2024-12-15 05:05:55.816602] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.461 05:05:56 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:42.461 05:05:56 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:42.461 05:05:56 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:42.720 05:05:56 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 110385 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 110385 ']' 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 110385 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110385 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110385' 00:05:42.720 killing process with pid 110385 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@973 -- # kill 110385 00:05:42.720 05:05:56 alias_rpc -- common/autotest_common.sh@978 -- # wait 110385 00:05:42.980 00:05:42.980 real 0m1.102s 00:05:42.980 user 0m1.101s 00:05:42.980 sys 0m0.428s 00:05:42.980 05:05:56 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.980 05:05:56 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 ************************************ 00:05:42.980 END TEST alias_rpc 00:05:42.980 ************************************ 00:05:42.980 05:05:56 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:05:42.980 05:05:56 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:42.980 05:05:56 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.980 05:05:56 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.980 05:05:56 -- common/autotest_common.sh@10 -- # set +x 00:05:42.980 ************************************ 00:05:42.980 START TEST spdkcli_tcp 00:05:42.980 ************************************ 00:05:42.980 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:43.239 * Looking for test storage... 
00:05:43.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:43.239 05:05:56 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.239 --rc genhtml_branch_coverage=1 00:05:43.239 --rc genhtml_function_coverage=1 00:05:43.239 --rc genhtml_legend=1 00:05:43.239 --rc geninfo_all_blocks=1 00:05:43.239 --rc geninfo_unexecuted_blocks=1 00:05:43.239 00:05:43.239 ' 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.239 --rc genhtml_branch_coverage=1 00:05:43.239 --rc genhtml_function_coverage=1 00:05:43.239 --rc genhtml_legend=1 00:05:43.239 --rc geninfo_all_blocks=1 00:05:43.239 --rc geninfo_unexecuted_blocks=1 00:05:43.239 00:05:43.239 ' 00:05:43.239 05:05:56 spdkcli_tcp -- 
common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.239 --rc genhtml_branch_coverage=1 00:05:43.239 --rc genhtml_function_coverage=1 00:05:43.239 --rc genhtml_legend=1 00:05:43.239 --rc geninfo_all_blocks=1 00:05:43.239 --rc geninfo_unexecuted_blocks=1 00:05:43.239 00:05:43.239 ' 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:43.239 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:43.239 --rc genhtml_branch_coverage=1 00:05:43.239 --rc genhtml_function_coverage=1 00:05:43.239 --rc genhtml_legend=1 00:05:43.239 --rc geninfo_all_blocks=1 00:05:43.239 --rc geninfo_unexecuted_blocks=1 00:05:43.239 00:05:43.239 ' 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=110591 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 110591 00:05:43.239 05:05:56 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 110591 ']' 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:43.239 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:43.239 05:05:56 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.239 [2024-12-15 05:05:56.901561] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:43.239 [2024-12-15 05:05:56.901614] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110591 ] 00:05:43.499 [2024-12-15 05:05:56.975486] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:43.499 [2024-12-15 05:05:56.998810] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:43.499 [2024-12-15 05:05:56.998813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.758 05:05:57 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:43.758 05:05:57 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:05:43.758 05:05:57 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=110679 00:05:43.758 05:05:57 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:43.758 05:05:57 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:43.758 [ 00:05:43.758 "bdev_malloc_delete", 00:05:43.758 "bdev_malloc_create", 00:05:43.758 "bdev_null_resize", 00:05:43.758 "bdev_null_delete", 00:05:43.758 "bdev_null_create", 00:05:43.758 "bdev_nvme_cuse_unregister", 00:05:43.758 "bdev_nvme_cuse_register", 00:05:43.758 "bdev_opal_new_user", 00:05:43.758 "bdev_opal_set_lock_state", 00:05:43.758 "bdev_opal_delete", 00:05:43.758 "bdev_opal_get_info", 00:05:43.758 "bdev_opal_create", 00:05:43.758 "bdev_nvme_opal_revert", 00:05:43.758 "bdev_nvme_opal_init", 00:05:43.758 "bdev_nvme_send_cmd", 00:05:43.758 "bdev_nvme_set_keys", 00:05:43.758 "bdev_nvme_get_path_iostat", 00:05:43.758 "bdev_nvme_get_mdns_discovery_info", 00:05:43.758 "bdev_nvme_stop_mdns_discovery", 00:05:43.758 "bdev_nvme_start_mdns_discovery", 00:05:43.758 "bdev_nvme_set_multipath_policy", 00:05:43.758 "bdev_nvme_set_preferred_path", 00:05:43.758 "bdev_nvme_get_io_paths", 00:05:43.758 "bdev_nvme_remove_error_injection", 00:05:43.758 "bdev_nvme_add_error_injection", 00:05:43.758 "bdev_nvme_get_discovery_info", 00:05:43.758 "bdev_nvme_stop_discovery", 00:05:43.758 "bdev_nvme_start_discovery", 00:05:43.758 "bdev_nvme_get_controller_health_info", 00:05:43.758 "bdev_nvme_disable_controller", 00:05:43.758 "bdev_nvme_enable_controller", 00:05:43.758 "bdev_nvme_reset_controller", 00:05:43.758 "bdev_nvme_get_transport_statistics", 00:05:43.758 "bdev_nvme_apply_firmware", 00:05:43.758 "bdev_nvme_detach_controller", 00:05:43.758 "bdev_nvme_get_controllers", 00:05:43.758 "bdev_nvme_attach_controller", 00:05:43.758 "bdev_nvme_set_hotplug", 00:05:43.758 "bdev_nvme_set_options", 00:05:43.758 "bdev_passthru_delete", 00:05:43.758 "bdev_passthru_create", 00:05:43.758 "bdev_lvol_set_parent_bdev", 00:05:43.758 "bdev_lvol_set_parent", 00:05:43.758 "bdev_lvol_check_shallow_copy", 00:05:43.758 "bdev_lvol_start_shallow_copy", 00:05:43.758 
"bdev_lvol_grow_lvstore", 00:05:43.758 "bdev_lvol_get_lvols", 00:05:43.758 "bdev_lvol_get_lvstores", 00:05:43.758 "bdev_lvol_delete", 00:05:43.758 "bdev_lvol_set_read_only", 00:05:43.758 "bdev_lvol_resize", 00:05:43.758 "bdev_lvol_decouple_parent", 00:05:43.758 "bdev_lvol_inflate", 00:05:43.758 "bdev_lvol_rename", 00:05:43.758 "bdev_lvol_clone_bdev", 00:05:43.758 "bdev_lvol_clone", 00:05:43.758 "bdev_lvol_snapshot", 00:05:43.758 "bdev_lvol_create", 00:05:43.758 "bdev_lvol_delete_lvstore", 00:05:43.758 "bdev_lvol_rename_lvstore", 00:05:43.758 "bdev_lvol_create_lvstore", 00:05:43.758 "bdev_raid_set_options", 00:05:43.758 "bdev_raid_remove_base_bdev", 00:05:43.758 "bdev_raid_add_base_bdev", 00:05:43.758 "bdev_raid_delete", 00:05:43.758 "bdev_raid_create", 00:05:43.758 "bdev_raid_get_bdevs", 00:05:43.758 "bdev_error_inject_error", 00:05:43.758 "bdev_error_delete", 00:05:43.758 "bdev_error_create", 00:05:43.758 "bdev_split_delete", 00:05:43.758 "bdev_split_create", 00:05:43.758 "bdev_delay_delete", 00:05:43.758 "bdev_delay_create", 00:05:43.758 "bdev_delay_update_latency", 00:05:43.758 "bdev_zone_block_delete", 00:05:43.758 "bdev_zone_block_create", 00:05:43.758 "blobfs_create", 00:05:43.758 "blobfs_detect", 00:05:43.758 "blobfs_set_cache_size", 00:05:43.758 "bdev_aio_delete", 00:05:43.758 "bdev_aio_rescan", 00:05:43.758 "bdev_aio_create", 00:05:43.758 "bdev_ftl_set_property", 00:05:43.758 "bdev_ftl_get_properties", 00:05:43.758 "bdev_ftl_get_stats", 00:05:43.758 "bdev_ftl_unmap", 00:05:43.758 "bdev_ftl_unload", 00:05:43.758 "bdev_ftl_delete", 00:05:43.758 "bdev_ftl_load", 00:05:43.758 "bdev_ftl_create", 00:05:43.758 "bdev_virtio_attach_controller", 00:05:43.758 "bdev_virtio_scsi_get_devices", 00:05:43.758 "bdev_virtio_detach_controller", 00:05:43.758 "bdev_virtio_blk_set_hotplug", 00:05:43.758 "bdev_iscsi_delete", 00:05:43.758 "bdev_iscsi_create", 00:05:43.758 "bdev_iscsi_set_options", 00:05:43.758 "accel_error_inject_error", 00:05:43.758 "ioat_scan_accel_module", 
00:05:43.758 "dsa_scan_accel_module", 00:05:43.758 "iaa_scan_accel_module", 00:05:43.758 "vfu_virtio_create_fs_endpoint", 00:05:43.758 "vfu_virtio_create_scsi_endpoint", 00:05:43.758 "vfu_virtio_scsi_remove_target", 00:05:43.758 "vfu_virtio_scsi_add_target", 00:05:43.758 "vfu_virtio_create_blk_endpoint", 00:05:43.758 "vfu_virtio_delete_endpoint", 00:05:43.758 "keyring_file_remove_key", 00:05:43.758 "keyring_file_add_key", 00:05:43.758 "keyring_linux_set_options", 00:05:43.758 "fsdev_aio_delete", 00:05:43.758 "fsdev_aio_create", 00:05:43.758 "iscsi_get_histogram", 00:05:43.758 "iscsi_enable_histogram", 00:05:43.758 "iscsi_set_options", 00:05:43.758 "iscsi_get_auth_groups", 00:05:43.758 "iscsi_auth_group_remove_secret", 00:05:43.758 "iscsi_auth_group_add_secret", 00:05:43.758 "iscsi_delete_auth_group", 00:05:43.758 "iscsi_create_auth_group", 00:05:43.758 "iscsi_set_discovery_auth", 00:05:43.758 "iscsi_get_options", 00:05:43.758 "iscsi_target_node_request_logout", 00:05:43.758 "iscsi_target_node_set_redirect", 00:05:43.758 "iscsi_target_node_set_auth", 00:05:43.758 "iscsi_target_node_add_lun", 00:05:43.758 "iscsi_get_stats", 00:05:43.758 "iscsi_get_connections", 00:05:43.758 "iscsi_portal_group_set_auth", 00:05:43.758 "iscsi_start_portal_group", 00:05:43.758 "iscsi_delete_portal_group", 00:05:43.758 "iscsi_create_portal_group", 00:05:43.758 "iscsi_get_portal_groups", 00:05:43.758 "iscsi_delete_target_node", 00:05:43.758 "iscsi_target_node_remove_pg_ig_maps", 00:05:43.758 "iscsi_target_node_add_pg_ig_maps", 00:05:43.758 "iscsi_create_target_node", 00:05:43.758 "iscsi_get_target_nodes", 00:05:43.758 "iscsi_delete_initiator_group", 00:05:43.758 "iscsi_initiator_group_remove_initiators", 00:05:43.758 "iscsi_initiator_group_add_initiators", 00:05:43.758 "iscsi_create_initiator_group", 00:05:43.758 "iscsi_get_initiator_groups", 00:05:43.758 "nvmf_set_crdt", 00:05:43.758 "nvmf_set_config", 00:05:43.758 "nvmf_set_max_subsystems", 00:05:43.758 "nvmf_stop_mdns_prr", 
00:05:43.758 "nvmf_publish_mdns_prr", 00:05:43.758 "nvmf_subsystem_get_listeners", 00:05:43.758 "nvmf_subsystem_get_qpairs", 00:05:43.758 "nvmf_subsystem_get_controllers", 00:05:43.758 "nvmf_get_stats", 00:05:43.758 "nvmf_get_transports", 00:05:43.758 "nvmf_create_transport", 00:05:43.758 "nvmf_get_targets", 00:05:43.758 "nvmf_delete_target", 00:05:43.758 "nvmf_create_target", 00:05:43.758 "nvmf_subsystem_allow_any_host", 00:05:43.758 "nvmf_subsystem_set_keys", 00:05:43.758 "nvmf_subsystem_remove_host", 00:05:43.758 "nvmf_subsystem_add_host", 00:05:43.758 "nvmf_ns_remove_host", 00:05:43.758 "nvmf_ns_add_host", 00:05:43.758 "nvmf_subsystem_remove_ns", 00:05:43.758 "nvmf_subsystem_set_ns_ana_group", 00:05:43.758 "nvmf_subsystem_add_ns", 00:05:43.758 "nvmf_subsystem_listener_set_ana_state", 00:05:43.758 "nvmf_discovery_get_referrals", 00:05:43.758 "nvmf_discovery_remove_referral", 00:05:43.758 "nvmf_discovery_add_referral", 00:05:43.758 "nvmf_subsystem_remove_listener", 00:05:43.758 "nvmf_subsystem_add_listener", 00:05:43.758 "nvmf_delete_subsystem", 00:05:43.758 "nvmf_create_subsystem", 00:05:43.758 "nvmf_get_subsystems", 00:05:43.758 "env_dpdk_get_mem_stats", 00:05:43.758 "nbd_get_disks", 00:05:43.758 "nbd_stop_disk", 00:05:43.758 "nbd_start_disk", 00:05:43.758 "ublk_recover_disk", 00:05:43.758 "ublk_get_disks", 00:05:43.758 "ublk_stop_disk", 00:05:43.758 "ublk_start_disk", 00:05:43.758 "ublk_destroy_target", 00:05:43.758 "ublk_create_target", 00:05:43.758 "virtio_blk_create_transport", 00:05:43.758 "virtio_blk_get_transports", 00:05:43.758 "vhost_controller_set_coalescing", 00:05:43.758 "vhost_get_controllers", 00:05:43.758 "vhost_delete_controller", 00:05:43.758 "vhost_create_blk_controller", 00:05:43.758 "vhost_scsi_controller_remove_target", 00:05:43.758 "vhost_scsi_controller_add_target", 00:05:43.758 "vhost_start_scsi_controller", 00:05:43.758 "vhost_create_scsi_controller", 00:05:43.758 "thread_set_cpumask", 00:05:43.758 "scheduler_set_options", 00:05:43.758 
"framework_get_governor", 00:05:43.758 "framework_get_scheduler", 00:05:43.758 "framework_set_scheduler", 00:05:43.758 "framework_get_reactors", 00:05:43.758 "thread_get_io_channels", 00:05:43.758 "thread_get_pollers", 00:05:43.758 "thread_get_stats", 00:05:43.758 "framework_monitor_context_switch", 00:05:43.758 "spdk_kill_instance", 00:05:43.758 "log_enable_timestamps", 00:05:43.758 "log_get_flags", 00:05:43.758 "log_clear_flag", 00:05:43.758 "log_set_flag", 00:05:43.758 "log_get_level", 00:05:43.758 "log_set_level", 00:05:43.758 "log_get_print_level", 00:05:43.758 "log_set_print_level", 00:05:43.758 "framework_enable_cpumask_locks", 00:05:43.758 "framework_disable_cpumask_locks", 00:05:43.758 "framework_wait_init", 00:05:43.758 "framework_start_init", 00:05:43.758 "scsi_get_devices", 00:05:43.758 "bdev_get_histogram", 00:05:43.758 "bdev_enable_histogram", 00:05:43.758 "bdev_set_qos_limit", 00:05:43.758 "bdev_set_qd_sampling_period", 00:05:43.758 "bdev_get_bdevs", 00:05:43.758 "bdev_reset_iostat", 00:05:43.758 "bdev_get_iostat", 00:05:43.758 "bdev_examine", 00:05:43.758 "bdev_wait_for_examine", 00:05:43.758 "bdev_set_options", 00:05:43.758 "accel_get_stats", 00:05:43.758 "accel_set_options", 00:05:43.758 "accel_set_driver", 00:05:43.758 "accel_crypto_key_destroy", 00:05:43.758 "accel_crypto_keys_get", 00:05:43.758 "accel_crypto_key_create", 00:05:43.758 "accel_assign_opc", 00:05:43.758 "accel_get_module_info", 00:05:43.758 "accel_get_opc_assignments", 00:05:43.758 "vmd_rescan", 00:05:43.758 "vmd_remove_device", 00:05:43.758 "vmd_enable", 00:05:43.758 "sock_get_default_impl", 00:05:43.758 "sock_set_default_impl", 00:05:43.758 "sock_impl_set_options", 00:05:43.758 "sock_impl_get_options", 00:05:43.758 "iobuf_get_stats", 00:05:43.758 "iobuf_set_options", 00:05:43.758 "keyring_get_keys", 00:05:43.758 "vfu_tgt_set_base_path", 00:05:43.758 "framework_get_pci_devices", 00:05:43.758 "framework_get_config", 00:05:43.758 "framework_get_subsystems", 00:05:43.758 
"fsdev_set_opts", 00:05:43.759 "fsdev_get_opts", 00:05:43.759 "trace_get_info", 00:05:43.759 "trace_get_tpoint_group_mask", 00:05:43.759 "trace_disable_tpoint_group", 00:05:43.759 "trace_enable_tpoint_group", 00:05:43.759 "trace_clear_tpoint_mask", 00:05:43.759 "trace_set_tpoint_mask", 00:05:43.759 "notify_get_notifications", 00:05:43.759 "notify_get_types", 00:05:43.759 "spdk_get_version", 00:05:43.759 "rpc_get_methods" 00:05:43.759 ] 00:05:43.759 05:05:57 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:43.759 05:05:57 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:43.759 05:05:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:43.759 05:05:57 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:43.759 05:05:57 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 110591 00:05:43.759 05:05:57 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 110591 ']' 00:05:43.759 05:05:57 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 110591 00:05:43.759 05:05:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:05:43.759 05:05:57 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:43.759 05:05:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110591 00:05:44.017 05:05:57 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:44.017 05:05:57 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:44.017 05:05:57 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110591' 00:05:44.017 killing process with pid 110591 00:05:44.017 05:05:57 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 110591 00:05:44.017 05:05:57 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 110591 00:05:44.276 00:05:44.276 real 0m1.116s 00:05:44.276 user 0m1.901s 00:05:44.276 sys 0m0.440s 00:05:44.276 05:05:57 spdkcli_tcp -- common/autotest_common.sh@1130 -- # 
xtrace_disable 00:05:44.276 05:05:57 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:44.276 ************************************ 00:05:44.276 END TEST spdkcli_tcp 00:05:44.276 ************************************ 00:05:44.276 05:05:57 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.276 05:05:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:44.276 05:05:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.276 05:05:57 -- common/autotest_common.sh@10 -- # set +x 00:05:44.276 ************************************ 00:05:44.276 START TEST dpdk_mem_utility 00:05:44.276 ************************************ 00:05:44.276 05:05:57 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:44.277 * Looking for test storage... 00:05:44.277 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:44.277 05:05:57 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:44.277 05:05:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:05:44.277 05:05:57 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:05:44.536 05:05:58 
dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:44.536 05:05:58 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:44.536 05:05:58 dpdk_mem_utility 
-- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:44.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.536 --rc genhtml_branch_coverage=1 00:05:44.536 --rc genhtml_function_coverage=1 00:05:44.536 --rc genhtml_legend=1 00:05:44.536 --rc geninfo_all_blocks=1 00:05:44.536 --rc geninfo_unexecuted_blocks=1 00:05:44.536 00:05:44.536 ' 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:44.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.536 --rc genhtml_branch_coverage=1 00:05:44.536 --rc genhtml_function_coverage=1 00:05:44.536 --rc genhtml_legend=1 00:05:44.536 --rc geninfo_all_blocks=1 00:05:44.536 --rc geninfo_unexecuted_blocks=1 00:05:44.536 00:05:44.536 ' 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:44.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.536 --rc genhtml_branch_coverage=1 00:05:44.536 --rc genhtml_function_coverage=1 00:05:44.536 --rc genhtml_legend=1 00:05:44.536 --rc geninfo_all_blocks=1 00:05:44.536 --rc geninfo_unexecuted_blocks=1 00:05:44.536 00:05:44.536 ' 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:44.536 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:44.536 --rc genhtml_branch_coverage=1 00:05:44.536 --rc genhtml_function_coverage=1 00:05:44.536 --rc genhtml_legend=1 00:05:44.536 --rc geninfo_all_blocks=1 00:05:44.536 --rc geninfo_unexecuted_blocks=1 00:05:44.536 00:05:44.536 ' 00:05:44.536 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.536 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=110772 00:05:44.536 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 110772 00:05:44.536 05:05:58 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 110772 ']' 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.536 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.536 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.536 [2024-12-15 05:05:58.070875] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:44.536 [2024-12-15 05:05:58.070920] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid110772 ] 00:05:44.536 [2024-12-15 05:05:58.145953] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.536 [2024-12-15 05:05:58.169010] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:44.796 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.796 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:05:44.796 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:44.796 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:44.796 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.796 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:44.796 { 00:05:44.796 "filename": "/tmp/spdk_mem_dump.txt" 00:05:44.796 } 00:05:44.796 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.796 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:44.796 DPDK memory size 818.000000 MiB in 1 heap(s) 00:05:44.796 1 heaps totaling size 818.000000 MiB 00:05:44.796 size: 818.000000 MiB heap id: 0 00:05:44.796 end heaps---------- 00:05:44.796 9 mempools totaling size 603.782043 MiB 00:05:44.796 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:44.796 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:44.796 size: 100.555481 MiB name: bdev_io_110772 00:05:44.796 size: 50.003479 MiB name: msgpool_110772 00:05:44.796 size: 36.509338 MiB name: fsdev_io_110772 00:05:44.796 
size: 21.763794 MiB name: PDU_Pool 00:05:44.796 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:44.796 size: 4.133484 MiB name: evtpool_110772 00:05:44.796 size: 0.026123 MiB name: Session_Pool 00:05:44.796 end mempools------- 00:05:44.796 6 memzones totaling size 4.142822 MiB 00:05:44.796 size: 1.000366 MiB name: RG_ring_0_110772 00:05:44.796 size: 1.000366 MiB name: RG_ring_1_110772 00:05:44.796 size: 1.000366 MiB name: RG_ring_4_110772 00:05:44.796 size: 1.000366 MiB name: RG_ring_5_110772 00:05:44.796 size: 0.125366 MiB name: RG_ring_2_110772 00:05:44.796 size: 0.015991 MiB name: RG_ring_3_110772 00:05:44.796 end memzones------- 00:05:44.796 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:44.796 heap id: 0 total size: 818.000000 MiB number of busy elements: 44 number of free elements: 15 00:05:44.796 list of free elements. size: 10.852478 MiB 00:05:44.796 element at address: 0x200019200000 with size: 0.999878 MiB 00:05:44.796 element at address: 0x200019400000 with size: 0.999878 MiB 00:05:44.796 element at address: 0x200000400000 with size: 0.998535 MiB 00:05:44.796 element at address: 0x200032000000 with size: 0.994446 MiB 00:05:44.796 element at address: 0x200006400000 with size: 0.959839 MiB 00:05:44.796 element at address: 0x200012c00000 with size: 0.944275 MiB 00:05:44.796 element at address: 0x200019600000 with size: 0.936584 MiB 00:05:44.796 element at address: 0x200000200000 with size: 0.717346 MiB 00:05:44.796 element at address: 0x20001ae00000 with size: 0.582886 MiB 00:05:44.796 element at address: 0x200000c00000 with size: 0.495422 MiB 00:05:44.796 element at address: 0x20000a600000 with size: 0.490723 MiB 00:05:44.797 element at address: 0x200019800000 with size: 0.485657 MiB 00:05:44.797 element at address: 0x200003e00000 with size: 0.481934 MiB 00:05:44.797 element at address: 0x200028200000 with size: 0.410034 MiB 00:05:44.797 
element at address: 0x200000800000 with size: 0.355042 MiB 00:05:44.797 list of standard malloc elements. size: 199.218628 MiB 00:05:44.797 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:05:44.797 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:05:44.797 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:44.797 element at address: 0x2000194fff80 with size: 1.000122 MiB 00:05:44.797 element at address: 0x2000196fff80 with size: 1.000122 MiB 00:05:44.797 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:44.797 element at address: 0x2000196eff00 with size: 0.062622 MiB 00:05:44.797 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:44.797 element at address: 0x2000196efdc0 with size: 0.000305 MiB 00:05:44.797 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000085b040 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000085f300 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000087f680 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200000cff000 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200000cff0c0 with size: 0.000183 
MiB 00:05:44.797 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200003e7b6c0 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200003efb980 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200012cf1bc0 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000196efc40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000196efd00 with size: 0.000183 MiB 00:05:44.797 element at address: 0x2000198bc740 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20001ae95380 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20001ae95440 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200028268f80 with size: 0.000183 MiB 00:05:44.797 element at address: 0x200028269040 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20002826fc40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20002826fe40 with size: 0.000183 MiB 00:05:44.797 element at address: 0x20002826ff00 with size: 0.000183 MiB 00:05:44.797 list of memzone associated elements. 
size: 607.928894 MiB 00:05:44.797 element at address: 0x20001ae95500 with size: 211.416748 MiB 00:05:44.797 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:44.797 element at address: 0x20002826ffc0 with size: 157.562561 MiB 00:05:44.797 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:44.797 element at address: 0x200012df1e80 with size: 100.055054 MiB 00:05:44.797 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_110772_0 00:05:44.797 element at address: 0x200000dff380 with size: 48.003052 MiB 00:05:44.797 associated memzone info: size: 48.002930 MiB name: MP_msgpool_110772_0 00:05:44.797 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:05:44.797 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_110772_0 00:05:44.797 element at address: 0x2000199be940 with size: 20.255554 MiB 00:05:44.797 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:44.797 element at address: 0x2000321feb40 with size: 18.005066 MiB 00:05:44.797 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:44.797 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:05:44.797 associated memzone info: size: 3.000122 MiB name: MP_evtpool_110772_0 00:05:44.797 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:05:44.797 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_110772 00:05:44.797 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:44.797 associated memzone info: size: 1.007996 MiB name: MP_evtpool_110772 00:05:44.797 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:05:44.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:44.797 element at address: 0x2000198bc800 with size: 1.008118 MiB 00:05:44.797 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:44.797 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:05:44.797 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:44.797 element at address: 0x200003efba40 with size: 1.008118 MiB 00:05:44.797 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:44.797 element at address: 0x200000cff180 with size: 1.000488 MiB 00:05:44.797 associated memzone info: size: 1.000366 MiB name: RG_ring_0_110772 00:05:44.797 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:05:44.797 associated memzone info: size: 1.000366 MiB name: RG_ring_1_110772 00:05:44.797 element at address: 0x200012cf1c80 with size: 1.000488 MiB 00:05:44.797 associated memzone info: size: 1.000366 MiB name: RG_ring_4_110772 00:05:44.797 element at address: 0x2000320fe940 with size: 1.000488 MiB 00:05:44.797 associated memzone info: size: 1.000366 MiB name: RG_ring_5_110772 00:05:44.797 element at address: 0x20000087f740 with size: 0.500488 MiB 00:05:44.797 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_110772 00:05:44.797 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:05:44.797 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_110772 00:05:44.797 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:05:44.797 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:44.797 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:05:44.797 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:44.797 element at address: 0x20001987c540 with size: 0.250488 MiB 00:05:44.797 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:44.797 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:05:44.797 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_110772 00:05:44.797 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:05:44.797 associated memzone info: size: 0.125366 MiB name: RG_ring_2_110772 00:05:44.797 element at address: 0x2000064f5b80 with size: 0.031738 MiB 
00:05:44.797 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:44.797 element at address: 0x200028269100 with size: 0.023743 MiB 00:05:44.797 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:44.797 element at address: 0x20000085b100 with size: 0.016113 MiB 00:05:44.797 associated memzone info: size: 0.015991 MiB name: RG_ring_3_110772 00:05:44.797 element at address: 0x20002826f240 with size: 0.002441 MiB 00:05:44.797 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:44.797 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:05:44.797 associated memzone info: size: 0.000183 MiB name: MP_msgpool_110772 00:05:44.797 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:05:44.797 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_110772 00:05:44.797 element at address: 0x20000085af00 with size: 0.000305 MiB 00:05:44.797 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_110772 00:05:44.797 element at address: 0x20002826fd00 with size: 0.000305 MiB 00:05:44.797 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:44.797 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:44.797 05:05:58 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 110772 00:05:44.797 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 110772 ']' 00:05:44.797 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 110772 00:05:44.797 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:05:45.056 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:45.056 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 110772 00:05:45.056 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:45.056 05:05:58 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:45.056 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 110772' 00:05:45.056 killing process with pid 110772 00:05:45.056 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 110772 00:05:45.056 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 110772 00:05:45.315 00:05:45.315 real 0m0.970s 00:05:45.315 user 0m0.921s 00:05:45.315 sys 0m0.394s 00:05:45.315 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.315 05:05:58 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:45.315 ************************************ 00:05:45.315 END TEST dpdk_mem_utility 00:05:45.315 ************************************ 00:05:45.315 05:05:58 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:45.315 05:05:58 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.315 05:05:58 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.315 05:05:58 -- common/autotest_common.sh@10 -- # set +x 00:05:45.315 ************************************ 00:05:45.315 START TEST event 00:05:45.315 ************************************ 00:05:45.315 05:05:58 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:45.315 * Looking for test storage... 
00:05:45.315 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:45.315 05:05:58 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.315 05:05:58 event -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.315 05:05:58 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.574 05:05:59 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.574 05:05:59 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.575 05:05:59 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.575 05:05:59 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.575 05:05:59 event -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.575 05:05:59 event -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.575 05:05:59 event -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.575 05:05:59 event -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.575 05:05:59 event -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.575 05:05:59 event -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.575 05:05:59 event -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.575 05:05:59 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.575 05:05:59 event -- scripts/common.sh@344 -- # case "$op" in 00:05:45.575 05:05:59 event -- scripts/common.sh@345 -- # : 1 00:05:45.575 05:05:59 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.575 05:05:59 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:45.575 05:05:59 event -- scripts/common.sh@365 -- # decimal 1 00:05:45.575 05:05:59 event -- scripts/common.sh@353 -- # local d=1 00:05:45.575 05:05:59 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.575 05:05:59 event -- scripts/common.sh@355 -- # echo 1 00:05:45.575 05:05:59 event -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.575 05:05:59 event -- scripts/common.sh@366 -- # decimal 2 00:05:45.575 05:05:59 event -- scripts/common.sh@353 -- # local d=2 00:05:45.575 05:05:59 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.575 05:05:59 event -- scripts/common.sh@355 -- # echo 2 00:05:45.575 05:05:59 event -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.575 05:05:59 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.575 05:05:59 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.575 05:05:59 event -- scripts/common.sh@368 -- # return 0 00:05:45.575 05:05:59 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.575 05:05:59 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.575 --rc genhtml_branch_coverage=1 00:05:45.575 --rc genhtml_function_coverage=1 00:05:45.575 --rc genhtml_legend=1 00:05:45.575 --rc geninfo_all_blocks=1 00:05:45.575 --rc geninfo_unexecuted_blocks=1 00:05:45.575 00:05:45.575 ' 00:05:45.575 05:05:59 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.575 --rc genhtml_branch_coverage=1 00:05:45.575 --rc genhtml_function_coverage=1 00:05:45.575 --rc genhtml_legend=1 00:05:45.575 --rc geninfo_all_blocks=1 00:05:45.575 --rc geninfo_unexecuted_blocks=1 00:05:45.575 00:05:45.575 ' 00:05:45.575 05:05:59 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.575 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:45.575 --rc genhtml_branch_coverage=1 00:05:45.575 --rc genhtml_function_coverage=1 00:05:45.575 --rc genhtml_legend=1 00:05:45.575 --rc geninfo_all_blocks=1 00:05:45.575 --rc geninfo_unexecuted_blocks=1 00:05:45.575 00:05:45.575 ' 00:05:45.575 05:05:59 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.575 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.575 --rc genhtml_branch_coverage=1 00:05:45.575 --rc genhtml_function_coverage=1 00:05:45.575 --rc genhtml_legend=1 00:05:45.575 --rc geninfo_all_blocks=1 00:05:45.575 --rc geninfo_unexecuted_blocks=1 00:05:45.575 00:05:45.575 ' 00:05:45.575 05:05:59 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:45.575 05:05:59 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:45.575 05:05:59 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.575 05:05:59 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:05:45.575 05:05:59 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.575 05:05:59 event -- common/autotest_common.sh@10 -- # set +x 00:05:45.575 ************************************ 00:05:45.575 START TEST event_perf 00:05:45.575 ************************************ 00:05:45.575 05:05:59 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:45.575 Running I/O for 1 seconds...[2024-12-15 05:05:59.118620] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:45.575 [2024-12-15 05:05:59.118687] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111045 ] 00:05:45.575 [2024-12-15 05:05:59.195599] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:45.575 [2024-12-15 05:05:59.221254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:45.575 [2024-12-15 05:05:59.221362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:45.575 [2024-12-15 05:05:59.221469] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.575 Running I/O for 1 seconds...[2024-12-15 05:05:59.221471] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:46.950 00:05:46.950 lcore 0: 203954 00:05:46.950 lcore 1: 203952 00:05:46.950 lcore 2: 203952 00:05:46.950 lcore 3: 203953 00:05:46.950 done. 
00:05:46.950 00:05:46.950 real 0m1.161s 00:05:46.950 user 0m4.079s 00:05:46.950 sys 0m0.078s 00:05:46.950 05:06:00 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:46.950 05:06:00 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:46.950 ************************************ 00:05:46.950 END TEST event_perf 00:05:46.950 ************************************ 00:05:46.950 05:06:00 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.950 05:06:00 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:46.950 05:06:00 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:46.950 05:06:00 event -- common/autotest_common.sh@10 -- # set +x 00:05:46.950 ************************************ 00:05:46.950 START TEST event_reactor 00:05:46.950 ************************************ 00:05:46.950 05:06:00 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:46.950 [2024-12-15 05:06:00.346079] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:46.950 [2024-12-15 05:06:00.346150] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111321 ] 00:05:46.950 [2024-12-15 05:06:00.425024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.950 [2024-12-15 05:06:00.446846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.886 test_start 00:05:47.886 oneshot 00:05:47.886 tick 100 00:05:47.886 tick 100 00:05:47.886 tick 250 00:05:47.886 tick 100 00:05:47.886 tick 100 00:05:47.886 tick 100 00:05:47.886 tick 250 00:05:47.886 tick 500 00:05:47.886 tick 100 00:05:47.886 tick 100 00:05:47.886 tick 250 00:05:47.886 tick 100 00:05:47.886 tick 100 00:05:47.886 test_end 00:05:47.886 00:05:47.886 real 0m1.152s 00:05:47.886 user 0m1.067s 00:05:47.886 sys 0m0.080s 00:05:47.886 05:06:01 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:47.886 05:06:01 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:47.886 ************************************ 00:05:47.886 END TEST event_reactor 00:05:47.886 ************************************ 00:05:47.886 05:06:01 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:47.886 05:06:01 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:05:47.886 05:06:01 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:47.886 05:06:01 event -- common/autotest_common.sh@10 -- # set +x 00:05:47.886 ************************************ 00:05:47.886 START TEST event_reactor_perf 00:05:47.886 ************************************ 00:05:47.886 05:06:01 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:05:47.886 [2024-12-15 05:06:01.568628] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:47.886 [2024-12-15 05:06:01.568696] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111645 ] 00:05:48.145 [2024-12-15 05:06:01.647284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:48.145 [2024-12-15 05:06:01.669167] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.082 test_start 00:05:49.082 test_end 00:05:49.082 Performance: 512979 events per second 00:05:49.082 00:05:49.082 real 0m1.152s 00:05:49.082 user 0m1.064s 00:05:49.082 sys 0m0.082s 00:05:49.082 05:06:02 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.082 05:06:02 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:49.082 ************************************ 00:05:49.082 END TEST event_reactor_perf 00:05:49.082 ************************************ 00:05:49.082 05:06:02 event -- event/event.sh@49 -- # uname -s 00:05:49.082 05:06:02 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:49.082 05:06:02 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:49.082 05:06:02 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.082 05:06:02 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.082 05:06:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:49.341 ************************************ 00:05:49.341 START TEST event_scheduler 00:05:49.341 ************************************ 00:05:49.341 05:06:02 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:49.341 * Looking for test storage... 00:05:49.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:49.341 05:06:02 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:49.341 05:06:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:05:49.341 05:06:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.342 05:06:02 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.342 --rc genhtml_branch_coverage=1 00:05:49.342 --rc genhtml_function_coverage=1 00:05:49.342 --rc genhtml_legend=1 00:05:49.342 --rc geninfo_all_blocks=1 00:05:49.342 --rc geninfo_unexecuted_blocks=1 00:05:49.342 00:05:49.342 ' 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.342 --rc genhtml_branch_coverage=1 00:05:49.342 --rc genhtml_function_coverage=1 00:05:49.342 --rc 
genhtml_legend=1 00:05:49.342 --rc geninfo_all_blocks=1 00:05:49.342 --rc geninfo_unexecuted_blocks=1 00:05:49.342 00:05:49.342 ' 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.342 --rc genhtml_branch_coverage=1 00:05:49.342 --rc genhtml_function_coverage=1 00:05:49.342 --rc genhtml_legend=1 00:05:49.342 --rc geninfo_all_blocks=1 00:05:49.342 --rc geninfo_unexecuted_blocks=1 00:05:49.342 00:05:49.342 ' 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:49.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.342 --rc genhtml_branch_coverage=1 00:05:49.342 --rc genhtml_function_coverage=1 00:05:49.342 --rc genhtml_legend=1 00:05:49.342 --rc geninfo_all_blocks=1 00:05:49.342 --rc geninfo_unexecuted_blocks=1 00:05:49.342 00:05:49.342 ' 00:05:49.342 05:06:02 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:49.342 05:06:02 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=111943 00:05:49.342 05:06:02 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:49.342 05:06:02 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:49.342 05:06:02 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 111943 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 111943 ']' 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:49.342 05:06:02 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.342 [2024-12-15 05:06:02.997169] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:05:49.342 [2024-12-15 05:06:02.997218] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid111943 ] 00:05:49.602 [2024-12-15 05:06:03.072378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:49.602 [2024-12-15 05:06:03.098622] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.602 [2024-12-15 05:06:03.098731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:49.602 [2024-12-15 05:06:03.098816] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:49.602 [2024-12-15 05:06:03.098818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:05:49.602 05:06:03 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 [2024-12-15 05:06:03.147391] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:49.602 [2024-12-15 05:06:03.147407] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:05:49.602 [2024-12-15 05:06:03.147416] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:49.602 [2024-12-15 05:06:03.147421] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:49.602 [2024-12-15 05:06:03.147426] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.602 05:06:03 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 [2024-12-15 05:06:03.217358] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.602 05:06:03 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.602 05:06:03 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 ************************************ 00:05:49.602 START TEST scheduler_create_thread 00:05:49.602 ************************************ 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 2 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.602 3 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.602 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 4 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 5 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.861 05:06:03 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 6 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 7 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 8 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.861 05:06:03 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 9 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.861 10 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.861 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:49.862 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.862 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:49.862 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.862 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:49.862 05:06:03 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:49.862 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.862 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:50.429 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.429 05:06:03 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:50.429 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.429 05:06:03 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:51.806 05:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.806 05:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:51.806 05:06:05 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:51.806 05:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:51.806 05:06:05 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.741 05:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.741 00:05:52.741 real 0m3.102s 00:05:52.741 user 0m0.023s 00:05:52.741 sys 0m0.007s 00:05:52.741 05:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.741 05:06:06 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:52.741 ************************************ 00:05:52.741 END TEST scheduler_create_thread 00:05:52.741 ************************************ 00:05:52.741 05:06:06 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:52.741 05:06:06 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 111943 00:05:52.741 05:06:06 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 111943 ']' 00:05:52.741 05:06:06 event.event_scheduler -- common/autotest_common.sh@958 -- # kill 
-0 111943 00:05:52.741 05:06:06 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:52.741 05:06:06 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:52.741 05:06:06 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 111943 00:05:53.000 05:06:06 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:53.000 05:06:06 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:53.000 05:06:06 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 111943' 00:05:53.000 killing process with pid 111943 00:05:53.000 05:06:06 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 111943 00:05:53.000 05:06:06 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 111943 00:05:53.259 [2024-12-15 05:06:06.736474] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:53.259 00:05:53.259 real 0m4.148s 00:05:53.259 user 0m6.646s 00:05:53.259 sys 0m0.374s 00:05:53.259 05:06:06 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:53.259 05:06:06 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:53.259 ************************************ 00:05:53.259 END TEST event_scheduler 00:05:53.259 ************************************ 00:05:53.518 05:06:06 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:53.518 05:06:06 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:53.518 05:06:06 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:53.518 05:06:06 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:53.518 05:06:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:53.518 ************************************ 00:05:53.518 START TEST app_repeat 00:05:53.518 ************************************ 00:05:53.518 05:06:07 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@19 -- # repeat_pid=112948 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@20 
-- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 112948' 00:05:53.518 Process app_repeat pid: 112948 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:53.518 spdk_app_start Round 0 00:05:53.518 05:06:07 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112948 /var/tmp/spdk-nbd.sock 00:05:53.518 05:06:07 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112948 ']' 00:05:53.518 05:06:07 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:53.518 05:06:07 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:53.518 05:06:07 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:53.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:53.518 05:06:07 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:53.518 05:06:07 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:53.518 [2024-12-15 05:06:07.036884] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:05:53.518 [2024-12-15 05:06:07.036934] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid112948 ] 00:05:53.518 [2024-12-15 05:06:07.114712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:53.518 [2024-12-15 05:06:07.138624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:53.518 [2024-12-15 05:06:07.138633] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.777 05:06:07 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.777 05:06:07 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:53.777 05:06:07 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:53.777 Malloc0 00:05:53.777 05:06:07 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:54.036 Malloc1 00:05:54.036 05:06:07 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:54.036 
05:06:07 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.036 05:06:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:54.295 /dev/nbd0 00:05:54.295 05:06:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:54.295 05:06:07 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:54.295 1+0 records in 00:05:54.295 1+0 records out 00:05:54.295 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000183318 s, 22.3 MB/s 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.295 05:06:07 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.295 05:06:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.295 05:06:07 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.295 05:06:07 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:54.554 /dev/nbd1 00:05:54.554 05:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:54.554 05:06:08 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:54.554 05:06:08 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:54.554 1+0 records in 00:05:54.554 1+0 records out 00:05:54.554 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000239573 s, 17.1 MB/s 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:54.554 05:06:08 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:54.554 05:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:54.554 05:06:08 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:54.554 05:06:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:54.554 05:06:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.554 05:06:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:54.813 { 00:05:54.813 "nbd_device": "/dev/nbd0", 00:05:54.813 "bdev_name": "Malloc0" 00:05:54.813 }, 00:05:54.813 { 00:05:54.813 "nbd_device": "/dev/nbd1", 00:05:54.813 "bdev_name": "Malloc1" 00:05:54.813 } 00:05:54.813 ]' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:54.813 { 00:05:54.813 "nbd_device": "/dev/nbd0", 00:05:54.813 "bdev_name": "Malloc0" 00:05:54.813 
}, 00:05:54.813 { 00:05:54.813 "nbd_device": "/dev/nbd1", 00:05:54.813 "bdev_name": "Malloc1" 00:05:54.813 } 00:05:54.813 ]' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:54.813 /dev/nbd1' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:54.813 /dev/nbd1' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:54.813 256+0 records in 00:05:54.813 256+0 records out 00:05:54.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0101976 s, 103 MB/s 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:54.813 256+0 records in 00:05:54.813 256+0 records out 00:05:54.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0138223 s, 75.9 MB/s 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:54.813 256+0 records in 00:05:54.813 256+0 records out 00:05:54.813 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150604 s, 69.6 MB/s 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:54.813 05:06:08 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:54.813 05:06:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:55.072 05:06:08 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:55.331 05:06:08 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:55.331 05:06:08 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:55.590 05:06:09 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:55.590 05:06:09 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:55.850 05:06:09 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:56.109 [2024-12-15 05:06:09.546898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:56.109 [2024-12-15 05:06:09.567326] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.109 [2024-12-15 05:06:09.567327] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.109 [2024-12-15 05:06:09.607698] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:56.109 [2024-12-15 05:06:09.607737] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:59.398 05:06:12 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:59.399 05:06:12 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:59.399 spdk_app_start Round 1 00:05:59.399 05:06:12 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112948 /var/tmp/spdk-nbd.sock 00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112948 ']' 00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:59.399 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:59.399 05:06:12 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:59.399 05:06:12 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.399 Malloc0 00:05:59.399 05:06:12 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:59.399 Malloc1 00:05:59.399 05:06:13 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.399 05:06:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:59.658 /dev/nbd0 00:05:59.658 05:06:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:59.658 05:06:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.658 1+0 records in 00:05:59.658 1+0 records out 00:05:59.658 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230168 s, 17.8 MB/s 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.658 05:06:13 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.658 05:06:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.658 05:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.658 05:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.658 05:06:13 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:59.916 /dev/nbd1 00:05:59.916 05:06:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.916 05:06:13 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.916 05:06:13 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.916 05:06:13 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:59.916 05:06:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.916 05:06:13 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.916 05:06:13 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:59.917 1+0 records in 00:05:59.917 1+0 records out 00:05:59.917 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000158715 s, 25.8 MB/s 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.917 05:06:13 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:59.917 05:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.917 05:06:13 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:59.917 05:06:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:59.917 05:06:13 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:59.917 05:06:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.176 { 00:06:00.176 "nbd_device": "/dev/nbd0", 00:06:00.176 "bdev_name": "Malloc0" 00:06:00.176 }, 00:06:00.176 { 00:06:00.176 "nbd_device": "/dev/nbd1", 00:06:00.176 "bdev_name": "Malloc1" 00:06:00.176 } 00:06:00.176 ]' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.176 { 00:06:00.176 "nbd_device": "/dev/nbd0", 00:06:00.176 "bdev_name": "Malloc0" 00:06:00.176 }, 00:06:00.176 { 00:06:00.176 "nbd_device": "/dev/nbd1", 00:06:00.176 "bdev_name": "Malloc1" 00:06:00.176 } 00:06:00.176 ]' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.176 /dev/nbd1' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.176 05:06:13 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.176 /dev/nbd1' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:00.176 256+0 records in 00:06:00.176 256+0 records out 00:06:00.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106577 s, 98.4 MB/s 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.176 256+0 records in 00:06:00.176 256+0 records out 00:06:00.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141696 s, 74.0 MB/s 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:00.176 256+0 records in 00:06:00.176 256+0 records out 00:06:00.176 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151833 s, 69.1 MB/s 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:00.176 05:06:13 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.435 05:06:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:00.435 05:06:13 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:00.435 05:06:13 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:00.435 05:06:13 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.435 05:06:13 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:00.435 05:06:14 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:00.694 05:06:14 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:00.694 05:06:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.695 05:06:14 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.695 05:06:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:00.953 05:06:14 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:00.954 05:06:14 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:00.954 05:06:14 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:00.954 05:06:14 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:00.954 05:06:14 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:00.954 05:06:14 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:01.213 05:06:14 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:01.472 [2024-12-15 05:06:14.900977] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:01.472 [2024-12-15 05:06:14.920896] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:01.472 [2024-12-15 05:06:14.920897] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:01.472 [2024-12-15 05:06:14.962071] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:01.472 [2024-12-15 05:06:14.962111] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:04.757 05:06:17 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:06:04.757 05:06:17 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:06:04.757 spdk_app_start Round 2 00:06:04.757 05:06:17 event.app_repeat -- event/event.sh@25 -- # waitforlisten 112948 /var/tmp/spdk-nbd.sock 00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112948 ']' 00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:04.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:04.757 05:06:17 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:04.757 05:06:17 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.757 Malloc0 00:06:04.757 05:06:18 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:06:04.757 Malloc1 00:06:04.757 05:06:18 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:04.757 05:06:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:06:05.016 /dev/nbd0 00:06:05.016 05:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:05.016 05:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.016 1+0 records in 00:06:05.016 1+0 records out 00:06:05.016 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000246599 s, 16.6 MB/s 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.016 05:06:18 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.016 05:06:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.016 05:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.016 05:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.016 05:06:18 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:06:05.274 /dev/nbd1 00:06:05.274 05:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:05.274 05:06:18 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:05.274 05:06:18 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:05.274 05:06:18 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:06:05.274 05:06:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:05.274 05:06:18 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:05.274 05:06:18 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:05.274 05:06:18 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:06:05.275 1+0 records in 00:06:05.275 1+0 records out 00:06:05.275 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226065 s, 18.1 MB/s 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:05.275 05:06:18 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:06:05.275 05:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:05.275 05:06:18 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:06:05.275 05:06:18 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:05.275 05:06:18 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.275 05:06:18 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:05.533 { 00:06:05.533 "nbd_device": "/dev/nbd0", 00:06:05.533 "bdev_name": "Malloc0" 00:06:05.533 }, 00:06:05.533 { 00:06:05.533 "nbd_device": "/dev/nbd1", 00:06:05.533 "bdev_name": "Malloc1" 00:06:05.533 } 00:06:05.533 ]' 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:05.533 { 00:06:05.533 "nbd_device": "/dev/nbd0", 00:06:05.533 "bdev_name": "Malloc0" 00:06:05.533 }, 00:06:05.533 { 00:06:05.533 "nbd_device": "/dev/nbd1", 00:06:05.533 "bdev_name": "Malloc1" 00:06:05.533 } 00:06:05.533 ]' 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:05.533 /dev/nbd1' 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:05.533 05:06:19 
event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:05.533 /dev/nbd1' 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.533 05:06:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:06:05.534 256+0 records in 00:06:05.534 256+0 records out 00:06:05.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106491 s, 98.5 MB/s 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:05.534 256+0 records in 00:06:05.534 256+0 records out 00:06:05.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.014113 s, 74.3 MB/s 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:05.534 256+0 records in 00:06:05.534 256+0 records out 00:06:05.534 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150454 s, 69.7 MB/s 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.534 05:06:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:05.792 05:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:05.792 05:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:05.792 05:06:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:05.792 05:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:05.792 05:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:05.792 05:06:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:05.793 05:06:19 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:06:05.793 05:06:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:05.793 05:06:19 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:05.793 05:06:19 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:06.051 05:06:19 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:06.051 05:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:06.309 05:06:19 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:06.309 05:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:06.309 05:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:06.309 05:06:19 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:06.310 05:06:19 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:06:06.310 05:06:19 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:06:06.568 05:06:20 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:06:06.568 [2024-12-15 05:06:20.233933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:06.568 [2024-12-15 05:06:20.254030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:06.568 [2024-12-15 05:06:20.254030] 
reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:06.827 [2024-12-15 05:06:20.294938] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:06:06.827 [2024-12-15 05:06:20.294974] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:06:10.111 05:06:23 event.app_repeat -- event/event.sh@38 -- # waitforlisten 112948 /var/tmp/spdk-nbd.sock 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 112948 ']' 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:10.111 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:06:10.111 05:06:23 event.app_repeat -- event/event.sh@39 -- # killprocess 112948 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 112948 ']' 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 112948 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 112948 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 112948' 00:06:10.111 killing process with pid 112948 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@973 -- # kill 112948 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@978 -- # wait 112948 00:06:10.111 spdk_app_start is called in Round 0. 00:06:10.111 Shutdown signal received, stop current app iteration 00:06:10.111 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:10.111 spdk_app_start is called in Round 1. 00:06:10.111 Shutdown signal received, stop current app iteration 00:06:10.111 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:10.111 spdk_app_start is called in Round 2. 
00:06:10.111 Shutdown signal received, stop current app iteration 00:06:10.111 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 reinitialization... 00:06:10.111 spdk_app_start is called in Round 3. 00:06:10.111 Shutdown signal received, stop current app iteration 00:06:10.111 05:06:23 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:06:10.111 05:06:23 event.app_repeat -- event/event.sh@42 -- # return 0 00:06:10.111 00:06:10.111 real 0m16.475s 00:06:10.111 user 0m36.456s 00:06:10.111 sys 0m2.513s 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:10.111 05:06:23 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:06:10.111 ************************************ 00:06:10.111 END TEST app_repeat 00:06:10.111 ************************************ 00:06:10.111 05:06:23 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:06:10.111 05:06:23 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.111 05:06:23 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.111 05:06:23 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.111 05:06:23 event -- common/autotest_common.sh@10 -- # set +x 00:06:10.111 ************************************ 00:06:10.111 START TEST cpu_locks 00:06:10.111 ************************************ 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:06:10.111 * Looking for test storage... 
00:06:10.111 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.111 05:06:23 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.111 --rc genhtml_branch_coverage=1 00:06:10.111 --rc genhtml_function_coverage=1 00:06:10.111 --rc genhtml_legend=1 00:06:10.111 --rc geninfo_all_blocks=1 00:06:10.111 --rc geninfo_unexecuted_blocks=1 00:06:10.111 00:06:10.111 ' 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.111 --rc genhtml_branch_coverage=1 00:06:10.111 --rc genhtml_function_coverage=1 00:06:10.111 --rc genhtml_legend=1 00:06:10.111 --rc geninfo_all_blocks=1 00:06:10.111 --rc geninfo_unexecuted_blocks=1 
00:06:10.111 00:06:10.111 ' 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.111 --rc genhtml_branch_coverage=1 00:06:10.111 --rc genhtml_function_coverage=1 00:06:10.111 --rc genhtml_legend=1 00:06:10.111 --rc geninfo_all_blocks=1 00:06:10.111 --rc geninfo_unexecuted_blocks=1 00:06:10.111 00:06:10.111 ' 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:10.111 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.111 --rc genhtml_branch_coverage=1 00:06:10.111 --rc genhtml_function_coverage=1 00:06:10.111 --rc genhtml_legend=1 00:06:10.111 --rc geninfo_all_blocks=1 00:06:10.111 --rc geninfo_unexecuted_blocks=1 00:06:10.111 00:06:10.111 ' 00:06:10.111 05:06:23 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:06:10.111 05:06:23 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:06:10.111 05:06:23 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:06:10.111 05:06:23 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:10.111 05:06:23 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.111 ************************************ 00:06:10.111 START TEST default_locks 00:06:10.111 ************************************ 00:06:10.111 05:06:23 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:06:10.111 05:06:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=116034 00:06:10.111 05:06:23 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 116034 00:06:10.111 05:06:23 event.cpu_locks.default_locks 
-- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:10.111 05:06:23 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 116034 ']' 00:06:10.111 05:06:23 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:10.111 05:06:23 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:10.111 05:06:23 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:10.112 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:10.112 05:06:23 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:10.112 05:06:23 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:10.371 [2024-12-15 05:06:23.814629] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:10.371 [2024-12-15 05:06:23.814669] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116034 ] 00:06:10.371 [2024-12-15 05:06:23.886279] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.371 [2024-12-15 05:06:23.908978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.629 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:10.629 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:06:10.629 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 116034 00:06:10.630 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 116034 00:06:10.630 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:10.889 lslocks: write error 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 116034 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 116034 ']' 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 116034 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116034 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 116034' 00:06:10.889 killing process with pid 116034 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 116034 00:06:10.889 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 116034 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 116034 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 116034 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 116034 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 116034 ']' 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.148 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.148 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (116034) - No such process 00:06:11.148 ERROR: process (pid: 116034) is no longer running 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.148 00:06:11.148 real 0m1.065s 00:06:11.148 user 0m1.026s 00:06:11.148 sys 0m0.493s 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:11.148 05:06:24 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.149 ************************************ 00:06:11.149 END TEST default_locks 00:06:11.149 ************************************ 00:06:11.408 05:06:24 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:06:11.408 05:06:24 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:11.408 05:06:24 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:11.408 05:06:24 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:11.408 ************************************ 00:06:11.408 START TEST default_locks_via_rpc 00:06:11.408 ************************************ 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=116250 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 116250 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 116250 ']' 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:11.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:11.408 05:06:24 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.408 [2024-12-15 05:06:24.947593] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:11.408 [2024-12-15 05:06:24.947633] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116250 ] 00:06:11.408 [2024-12-15 05:06:25.006443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.408 [2024-12-15 05:06:25.028958] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:11.666 05:06:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 116250 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 116250 00:06:11.666 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 116250 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 116250 ']' 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 116250 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116250 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116250' 00:06:12.233 killing process with pid 116250 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 116250 00:06:12.233 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 116250 00:06:12.493 00:06:12.493 real 0m1.089s 00:06:12.493 user 0m1.061s 00:06:12.493 sys 0m0.499s 00:06:12.493 05:06:25 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.493 05:06:25 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:12.493 ************************************ 00:06:12.493 END TEST default_locks_via_rpc 00:06:12.493 ************************************ 00:06:12.493 05:06:26 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:06:12.493 05:06:26 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:12.493 05:06:26 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.493 05:06:26 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:06:12.493 ************************************ 00:06:12.493 START TEST non_locking_app_on_locked_coremask 00:06:12.493 ************************************ 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=116500 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 116500 /var/tmp/spdk.sock 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116500 ']' 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:06:12.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:12.493 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:06:12.493 [2024-12-15 05:06:26.099870] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:12.493 [2024-12-15 05:06:26.099908] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116500 ] 00:06:12.493 [2024-12-15 05:06:26.171379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.752 [2024-12-15 05:06:26.194825] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=116510 00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 116510 /var/tmp/spdk2.sock 00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116510 ']' 00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock
00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:12.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:12.752 05:06:26 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:13.011 [2024-12-15 05:06:26.443042] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:13.011 [2024-12-15 05:06:26.443090] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116510 ]
00:06:13.011 [2024-12-15 05:06:26.531157] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:13.011 [2024-12-15 05:06:26.531180] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:13.011 [2024-12-15 05:06:26.573256] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:13.947 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:13.947 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:13.948 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 116500
00:06:13.948 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 116500
00:06:13.948 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:14.207 lslocks: write error
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 116500
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116500 ']'
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116500
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116500
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116500'
00:06:14.207 killing process with pid 116500
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116500
00:06:14.207 05:06:27 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116500
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 116510
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116510 ']'
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 116510
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116510
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116510'
00:06:14.775 killing process with pid 116510
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 116510
00:06:14.775 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 116510
00:06:15.343
00:06:15.343 real	0m2.688s
00:06:15.343 user	0m2.849s
00:06:15.343 sys	0m0.905s
00:06:15.343 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:15.343 05:06:28 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.343 ************************************
00:06:15.343 END TEST non_locking_app_on_locked_coremask
00:06:15.343 ************************************
00:06:15.343 05:06:28 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:06:15.343 05:06:28 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:15.343 05:06:28 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:15.343 05:06:28 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:15.343 ************************************
00:06:15.343 START TEST locking_app_on_unlocked_coremask
00:06:15.343 ************************************
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=116986
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 116986 /var/tmp/spdk.sock
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 116986 ']'
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:15.343 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.343 05:06:28 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.343 [2024-12-15 05:06:28.863262] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:15.343 [2024-12-15 05:06:28.863314] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid116986 ]
00:06:15.343 [2024-12-15 05:06:28.936660] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:15.343 [2024-12-15 05:06:28.936686] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.343 [2024-12-15 05:06:28.956885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=117000
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 117000 /var/tmp/spdk2.sock
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117000 ']'
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:15.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:15.602 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:15.602 [2024-12-15 05:06:29.211499] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:15.602 [2024-12-15 05:06:29.211547] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117000 ]
00:06:15.861 [2024-12-15 05:06:29.302296] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:15.861 [2024-12-15 05:06:29.346748] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:16.120 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:16.120 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:16.120 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 117000
00:06:16.120 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117000
00:06:16.120 05:06:29 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:16.688 lslocks: write error
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 116986
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 116986 ']'
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 116986
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 116986
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 116986'
00:06:16.688 killing process with pid 116986
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 116986
00:06:16.688 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 116986
00:06:17.256 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 117000
00:06:17.256 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117000 ']'
00:06:17.256 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 117000
00:06:17.256 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:17.256 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:17.256 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117000
00:06:17.515 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:17.515 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:17.515 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117000'
00:06:17.515 killing process with pid 117000
00:06:17.515 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 117000
00:06:17.515 05:06:30 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 117000
00:06:17.774
00:06:17.774 real	0m2.447s
00:06:17.774 user	0m2.464s
00:06:17.774 sys	0m0.931s
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.774 ************************************
00:06:17.774 END TEST locking_app_on_unlocked_coremask
00:06:17.774 ************************************
00:06:17.774 05:06:31 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:06:17.774 05:06:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:17.774 05:06:31 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:17.774 05:06:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:17.774 ************************************
00:06:17.774 START TEST locking_app_on_locked_coremask
00:06:17.774 ************************************
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=117469
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 117469 /var/tmp/spdk.sock
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117469 ']'
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:17.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:17.774 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:17.774 [2024-12-15 05:06:31.378660] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:17.774 [2024-12-15 05:06:31.378700] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117469 ]
00:06:17.774 [2024-12-15 05:06:31.454440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:18.033 [2024-12-15 05:06:31.477428] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=117475
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 117475 /var/tmp/spdk2.sock
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117475 /var/tmp/spdk2.sock
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117475 /var/tmp/spdk2.sock
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 117475 ']'
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:18.033 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:18.033 05:06:31 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:18.292 [2024-12-15 05:06:31.727908] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:18.292 [2024-12-15 05:06:31.727953] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117475 ]
00:06:18.292 [2024-12-15 05:06:31.812202] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 117469 has claimed it.
00:06:18.292 [2024-12-15 05:06:31.812230] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:18.859 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117475) - No such process
00:06:18.859 ERROR: process (pid: 117475) is no longer running
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 117469
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 117469
00:06:18.859 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:06:19.426 lslocks: write error
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 117469
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 117469 ']'
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 117469
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117469
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117469'
00:06:19.426 killing process with pid 117469
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 117469
00:06:19.426 05:06:32 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 117469
00:06:19.686
00:06:19.686 real	0m1.953s
00:06:19.686 user	0m2.081s
00:06:19.686 sys	0m0.672s
00:06:19.686 05:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:19.686 05:06:33 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:19.686 ************************************
00:06:19.686 END TEST locking_app_on_locked_coremask
00:06:19.686 ************************************
00:06:19.686 05:06:33 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:06:19.686 05:06:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:19.686 05:06:33 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:19.686 05:06:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:19.686 ************************************
00:06:19.686 START TEST locking_overlapped_coremask
00:06:19.686 ************************************
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=117821
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 117821 /var/tmp/spdk.sock
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117821 ']'
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:19.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:19.686 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:19.945 [2024-12-15 05:06:33.398901] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:19.945 [2024-12-15 05:06:33.398945] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117821 ]
00:06:19.945 [2024-12-15 05:06:33.471971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:19.945 [2024-12-15 05:06:33.497253] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:19.945 [2024-12-15 05:06:33.497360] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:19.945 [2024-12-15 05:06:33.497361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:20.204 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:20.204 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=117951
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 117951 /var/tmp/spdk2.sock
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 117951 /var/tmp/spdk2.sock
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 117951 /var/tmp/spdk2.sock
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 117951 ']'
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:20.205 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:20.205 05:06:33 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:20.205 [2024-12-15 05:06:33.752057] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:20.205 [2024-12-15 05:06:33.752107] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid117951 ]
00:06:20.205 [2024-12-15 05:06:33.840304] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 117821 has claimed it.
00:06:20.205 [2024-12-15 05:06:33.840338] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:06:20.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (117951) - No such process
00:06:20.773 ERROR: process (pid: 117951) is no longer running
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]]
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 117821
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 117821 ']'
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 117821
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 117821
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 117821'
00:06:20.773 killing process with pid 117821
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 117821
00:06:20.773 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 117821
00:06:21.342
00:06:21.342 real	0m1.385s
00:06:21.342 user	0m3.860s
00:06:21.342 sys	0m0.388s
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:06:21.342 ************************************
00:06:21.342 END TEST locking_overlapped_coremask
00:06:21.342 ************************************
00:06:21.342 05:06:34 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc
00:06:21.342 05:06:34 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:06:21.342 05:06:34 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:21.342 05:06:34 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:06:21.342 ************************************
00:06:21.342 START TEST locking_overlapped_coremask_via_rpc
00:06:21.342 ************************************
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=118137
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 118137 /var/tmp/spdk.sock
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118137 ']'
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:21.342 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.342 05:06:34 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:21.342 [2024-12-15 05:06:34.855627] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:21.342 [2024-12-15 05:06:34.855668] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118137 ]
00:06:21.342 [2024-12-15 05:06:34.933365] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:21.342 [2024-12-15 05:06:34.933390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:06:21.342 [2024-12-15 05:06:34.958450] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:21.342 [2024-12-15 05:06:34.958556] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:21.342 [2024-12-15 05:06:34.958558] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=118206
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 118206 /var/tmp/spdk2.sock
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118206 ']'
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:06:21.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:06:21.601 05:06:35 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:06:21.860 [2024-12-15 05:06:35.199214] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:06:21.860 [2024-12-15 05:06:35.199260] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118206 ]
00:06:21.860 [2024-12-15 05:06:35.288687] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:06:21.860 [2024-12-15 05:06:35.288713] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:21.860 [2024-12-15 05:06:35.337416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:21.860 [2024-12-15 05:06:35.337451] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:21.860 [2024-12-15 05:06:35.337452] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.427 05:06:36 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.427 [2024-12-15 05:06:36.045062] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 118137 has claimed it. 00:06:22.427 request: 00:06:22.427 { 00:06:22.427 "method": "framework_enable_cpumask_locks", 00:06:22.427 "req_id": 1 00:06:22.427 } 00:06:22.427 Got JSON-RPC error response 00:06:22.427 response: 00:06:22.427 { 00:06:22.427 "code": -32603, 00:06:22.427 "message": "Failed to claim CPU core: 2" 00:06:22.427 } 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 118137 /var/tmp/spdk.sock 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- 
# '[' -z 118137 ']' 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:22.427 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.427 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 118206 /var/tmp/spdk2.sock 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 118206 ']' 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:06:22.686 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
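The `killprocess` sequence traced repeatedly above (probe with `kill -0`, read the command name with `ps --no-headers -o comm=`, refuse to touch `sudo`, then SIGKILL and reap) can be sketched as a standalone function. This is a minimal reconstruction from the xtrace lines, not the actual SPDK `autotest_common.sh` helper; the function and variable names here are illustrative.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern seen in the trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    # kill -0 sends no signal; it only checks that the pid exists.
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "Process with pid $pid is not found"
        return 0
    fi
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid")
    # Guard: never SIGKILL a sudo wrapper, as the trace's '[' ... = sudo ']' test does.
    [ "$process_name" != "sudo" ] || return 1
    echo "killing process with pid $pid"
    kill -9 "$pid"
    wait "$pid" 2>/dev/null || true   # reap the child so the pid is fully gone
}

# Usage: start a throwaway child and kill it.
sleep 60 &
killprocess $!
```

The `wait` at the end mirrors the `wait $pid` step in the trace: reaping the child guarantees a later `kill -0` on the same pid fails rather than finding a zombie.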
00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:22.686 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:06:22.945 00:06:22.945 real 0m1.675s 00:06:22.945 user 0m0.835s 00:06:22.945 sys 0m0.130s 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:22.945 05:06:36 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:22.945 ************************************ 00:06:22.945 END TEST locking_overlapped_coremask_via_rpc 00:06:22.945 ************************************ 00:06:22.945 05:06:36 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:06:22.945 05:06:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 118137 ]] 00:06:22.945 05:06:36 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 118137 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118137 ']' 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118137 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118137 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118137' 00:06:22.945 killing process with pid 118137 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 118137 00:06:22.945 05:06:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 118137 00:06:23.204 05:06:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 118206 ]] 00:06:23.204 05:06:36 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 118206 00:06:23.204 05:06:36 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118206 ']' 00:06:23.204 05:06:36 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118206 00:06:23.204 05:06:36 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:06:23.204 05:06:36 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:23.204 05:06:36 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 118206 00:06:23.463 05:06:36 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:06:23.463 05:06:36 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:06:23.463 05:06:36 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 118206' 00:06:23.463 
killing process with pid 118206 00:06:23.463 05:06:36 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 118206 00:06:23.463 05:06:36 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 118206 00:06:23.722 05:06:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.722 05:06:37 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:06:23.722 05:06:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 118137 ]] 00:06:23.722 05:06:37 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 118137 00:06:23.722 05:06:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118137 ']' 00:06:23.722 05:06:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118137 00:06:23.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (118137) - No such process 00:06:23.722 05:06:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 118137 is not found' 00:06:23.722 Process with pid 118137 is not found 00:06:23.722 05:06:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 118206 ]] 00:06:23.722 05:06:37 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 118206 00:06:23.722 05:06:37 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 118206 ']' 00:06:23.722 05:06:37 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 118206 00:06:23.722 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (118206) - No such process 00:06:23.722 05:06:37 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 118206 is not found' 00:06:23.722 Process with pid 118206 is not found 00:06:23.722 05:06:37 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:06:23.722 00:06:23.722 real 0m13.684s 00:06:23.722 user 0m23.914s 00:06:23.722 sys 0m4.981s 00:06:23.722 05:06:37 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.722 05:06:37 event.cpu_locks -- 
common/autotest_common.sh@10 -- # set +x 00:06:23.722 ************************************ 00:06:23.722 END TEST cpu_locks 00:06:23.722 ************************************ 00:06:23.722 00:06:23.722 real 0m38.382s 00:06:23.722 user 1m13.502s 00:06:23.722 sys 0m8.482s 00:06:23.722 05:06:37 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.722 05:06:37 event -- common/autotest_common.sh@10 -- # set +x 00:06:23.722 ************************************ 00:06:23.722 END TEST event 00:06:23.722 ************************************ 00:06:23.722 05:06:37 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:23.722 05:06:37 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:23.722 05:06:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.722 05:06:37 -- common/autotest_common.sh@10 -- # set +x 00:06:23.722 ************************************ 00:06:23.722 START TEST thread 00:06:23.722 ************************************ 00:06:23.722 05:06:37 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:06:23.982 * Looking for test storage... 
00:06:23.982 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:23.982 05:06:37 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:23.982 05:06:37 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:23.982 05:06:37 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:23.982 05:06:37 thread -- scripts/common.sh@336 -- # IFS=.-: 00:06:23.982 05:06:37 thread -- scripts/common.sh@336 -- # read -ra ver1 00:06:23.982 05:06:37 thread -- scripts/common.sh@337 -- # IFS=.-: 00:06:23.982 05:06:37 thread -- scripts/common.sh@337 -- # read -ra ver2 00:06:23.982 05:06:37 thread -- scripts/common.sh@338 -- # local 'op=<' 00:06:23.982 05:06:37 thread -- scripts/common.sh@340 -- # ver1_l=2 00:06:23.982 05:06:37 thread -- scripts/common.sh@341 -- # ver2_l=1 00:06:23.982 05:06:37 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:23.982 05:06:37 thread -- scripts/common.sh@344 -- # case "$op" in 00:06:23.982 05:06:37 thread -- scripts/common.sh@345 -- # : 1 00:06:23.982 05:06:37 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:23.982 05:06:37 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:23.982 05:06:37 thread -- scripts/common.sh@365 -- # decimal 1 00:06:23.982 05:06:37 thread -- scripts/common.sh@353 -- # local d=1 00:06:23.982 05:06:37 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:23.982 05:06:37 thread -- scripts/common.sh@355 -- # echo 1 00:06:23.982 05:06:37 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:06:23.982 05:06:37 thread -- scripts/common.sh@366 -- # decimal 2 00:06:23.982 05:06:37 thread -- scripts/common.sh@353 -- # local d=2 00:06:23.982 05:06:37 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:23.982 05:06:37 thread -- scripts/common.sh@355 -- # echo 2 00:06:23.982 05:06:37 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:06:23.982 05:06:37 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:23.982 05:06:37 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:23.982 05:06:37 thread -- scripts/common.sh@368 -- # return 0 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:23.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.982 --rc genhtml_branch_coverage=1 00:06:23.982 --rc genhtml_function_coverage=1 00:06:23.982 --rc genhtml_legend=1 00:06:23.982 --rc geninfo_all_blocks=1 00:06:23.982 --rc geninfo_unexecuted_blocks=1 00:06:23.982 00:06:23.982 ' 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:23.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.982 --rc genhtml_branch_coverage=1 00:06:23.982 --rc genhtml_function_coverage=1 00:06:23.982 --rc genhtml_legend=1 00:06:23.982 --rc geninfo_all_blocks=1 00:06:23.982 --rc geninfo_unexecuted_blocks=1 00:06:23.982 00:06:23.982 ' 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:23.982 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.982 --rc genhtml_branch_coverage=1 00:06:23.982 --rc genhtml_function_coverage=1 00:06:23.982 --rc genhtml_legend=1 00:06:23.982 --rc geninfo_all_blocks=1 00:06:23.982 --rc geninfo_unexecuted_blocks=1 00:06:23.982 00:06:23.982 ' 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:23.982 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:23.982 --rc genhtml_branch_coverage=1 00:06:23.982 --rc genhtml_function_coverage=1 00:06:23.982 --rc genhtml_legend=1 00:06:23.982 --rc geninfo_all_blocks=1 00:06:23.982 --rc geninfo_unexecuted_blocks=1 00:06:23.982 00:06:23.982 ' 00:06:23.982 05:06:37 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.982 05:06:37 thread -- common/autotest_common.sh@10 -- # set +x 00:06:23.982 ************************************ 00:06:23.982 START TEST thread_poller_perf 00:06:23.982 ************************************ 00:06:23.982 05:06:37 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:06:23.982 [2024-12-15 05:06:37.572114] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:23.982 [2024-12-15 05:06:37.572182] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118714 ] 00:06:23.982 [2024-12-15 05:06:37.648238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.241 [2024-12-15 05:06:37.670570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.241 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:06:25.177 [2024-12-15T04:06:38.864Z] ====================================== 00:06:25.177 [2024-12-15T04:06:38.864Z] busy:2105229222 (cyc) 00:06:25.177 [2024-12-15T04:06:38.864Z] total_run_count: 424000 00:06:25.177 [2024-12-15T04:06:38.864Z] tsc_hz: 2100000000 (cyc) 00:06:25.177 [2024-12-15T04:06:38.864Z] ====================================== 00:06:25.177 [2024-12-15T04:06:38.864Z] poller_cost: 4965 (cyc), 2364 (nsec) 00:06:25.177 00:06:25.177 real 0m1.156s 00:06:25.177 user 0m1.078s 00:06:25.177 sys 0m0.075s 00:06:25.177 05:06:38 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:25.177 05:06:38 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:25.177 ************************************ 00:06:25.177 END TEST thread_poller_perf 00:06:25.177 ************************************ 00:06:25.177 05:06:38 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.177 05:06:38 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:06:25.177 05:06:38 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.177 05:06:38 thread -- common/autotest_common.sh@10 -- # set +x 00:06:25.177 ************************************ 00:06:25.177 START TEST thread_poller_perf 00:06:25.177 
************************************ 00:06:25.177 05:06:38 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:25.177 [2024-12-15 05:06:38.800194] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:25.177 [2024-12-15 05:06:38.800258] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid118879 ] 00:06:25.436 [2024-12-15 05:06:38.875683] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.436 [2024-12-15 05:06:38.897355] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.436 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:06:26.373 [2024-12-15T04:06:40.060Z] ====================================== 00:06:26.373 [2024-12-15T04:06:40.060Z] busy:2101622668 (cyc) 00:06:26.373 [2024-12-15T04:06:40.060Z] total_run_count: 5186000 00:06:26.373 [2024-12-15T04:06:40.060Z] tsc_hz: 2100000000 (cyc) 00:06:26.373 [2024-12-15T04:06:40.060Z] ====================================== 00:06:26.373 [2024-12-15T04:06:40.060Z] poller_cost: 405 (cyc), 192 (nsec) 00:06:26.373 00:06:26.373 real 0m1.150s 00:06:26.373 user 0m1.073s 00:06:26.373 sys 0m0.073s 00:06:26.373 05:06:39 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.373 05:06:39 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:26.373 ************************************ 00:06:26.373 END TEST thread_poller_perf 00:06:26.373 ************************************ 00:06:26.373 05:06:39 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:26.373 00:06:26.373 real 0m2.626s 00:06:26.373 user 0m2.296s 00:06:26.373 sys 0m0.346s 00:06:26.373 05:06:39 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:06:26.373 05:06:39 thread -- common/autotest_common.sh@10 -- # set +x 00:06:26.373 ************************************ 00:06:26.373 END TEST thread 00:06:26.373 ************************************ 00:06:26.373 05:06:39 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:06:26.373 05:06:39 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:26.373 05:06:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:26.373 05:06:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:26.373 05:06:39 -- common/autotest_common.sh@10 -- # set +x 00:06:26.373 ************************************ 00:06:26.373 START TEST app_cmdline 00:06:26.373 ************************************ 00:06:26.373 05:06:40 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:26.632 * Looking for test storage... 00:06:26.632 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:26.632 05:06:40 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:26.632 05:06:40 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:06:26.632 05:06:40 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:26.632 05:06:40 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@345 -- # : 1 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:26.632 05:06:40 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:26.633 05:06:40 app_cmdline -- scripts/common.sh@368 -- # return 0 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:26.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.633 --rc genhtml_branch_coverage=1 
00:06:26.633 --rc genhtml_function_coverage=1 00:06:26.633 --rc genhtml_legend=1 00:06:26.633 --rc geninfo_all_blocks=1 00:06:26.633 --rc geninfo_unexecuted_blocks=1 00:06:26.633 00:06:26.633 ' 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:26.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.633 --rc genhtml_branch_coverage=1 00:06:26.633 --rc genhtml_function_coverage=1 00:06:26.633 --rc genhtml_legend=1 00:06:26.633 --rc geninfo_all_blocks=1 00:06:26.633 --rc geninfo_unexecuted_blocks=1 00:06:26.633 00:06:26.633 ' 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:26.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.633 --rc genhtml_branch_coverage=1 00:06:26.633 --rc genhtml_function_coverage=1 00:06:26.633 --rc genhtml_legend=1 00:06:26.633 --rc geninfo_all_blocks=1 00:06:26.633 --rc geninfo_unexecuted_blocks=1 00:06:26.633 00:06:26.633 ' 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:26.633 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:26.633 --rc genhtml_branch_coverage=1 00:06:26.633 --rc genhtml_function_coverage=1 00:06:26.633 --rc genhtml_legend=1 00:06:26.633 --rc geninfo_all_blocks=1 00:06:26.633 --rc geninfo_unexecuted_blocks=1 00:06:26.633 00:06:26.633 ' 00:06:26.633 05:06:40 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:26.633 05:06:40 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=119205 00:06:26.633 05:06:40 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 119205 00:06:26.633 05:06:40 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 119205 ']' 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:26.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:26.633 05:06:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:26.633 [2024-12-15 05:06:40.258812] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:26.633 [2024-12-15 05:06:40.258864] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid119205 ] 00:06:26.892 [2024-12-15 05:06:40.333030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:26.892 [2024-12-15 05:06:40.355665] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.892 05:06:40 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.892 05:06:40 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:06:26.892 05:06:40 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:27.151 { 00:06:27.151 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:06:27.151 "fields": { 00:06:27.151 "major": 25, 00:06:27.151 "minor": 1, 00:06:27.151 "patch": 0, 00:06:27.151 "suffix": "-pre", 00:06:27.151 "commit": "e01cb43b8" 00:06:27.151 } 00:06:27.151 } 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@24 -- # 
expected_methods+=("spdk_get_version") 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:27.151 05:06:40 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type 
-t "$arg")" in 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:27.151 05:06:40 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:27.410 request: 00:06:27.410 { 00:06:27.410 "method": "env_dpdk_get_mem_stats", 00:06:27.410 "req_id": 1 00:06:27.410 } 00:06:27.410 Got JSON-RPC error response 00:06:27.410 response: 00:06:27.410 { 00:06:27.410 "code": -32601, 00:06:27.410 "message": "Method not found" 00:06:27.410 } 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:27.410 05:06:40 app_cmdline -- app/cmdline.sh@1 -- # killprocess 119205 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 119205 ']' 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 119205 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:27.410 05:06:40 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 119205 00:06:27.410 05:06:41 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:27.410 05:06:41 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:27.410 05:06:41 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 119205' 00:06:27.410 killing process with pid 119205 00:06:27.410 05:06:41 
app_cmdline -- common/autotest_common.sh@973 -- # kill 119205 00:06:27.410 05:06:41 app_cmdline -- common/autotest_common.sh@978 -- # wait 119205 00:06:27.670 00:06:27.670 real 0m1.293s 00:06:27.670 user 0m1.503s 00:06:27.670 sys 0m0.460s 00:06:27.670 05:06:41 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:27.670 05:06:41 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:27.670 ************************************ 00:06:27.670 END TEST app_cmdline 00:06:27.670 ************************************ 00:06:27.929 05:06:41 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:27.929 05:06:41 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:27.929 05:06:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:27.929 05:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:27.929 ************************************ 00:06:27.929 START TEST version 00:06:27.929 ************************************ 00:06:27.929 05:06:41 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:27.929 * Looking for test storage... 
00:06:27.929 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:27.929 05:06:41 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:27.929 05:06:41 version -- common/autotest_common.sh@1711 -- # lcov --version 00:06:27.929 05:06:41 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:27.930 05:06:41 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:27.930 05:06:41 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:27.930 05:06:41 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:27.930 05:06:41 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:27.930 05:06:41 version -- scripts/common.sh@336 -- # IFS=.-: 00:06:27.930 05:06:41 version -- scripts/common.sh@336 -- # read -ra ver1 00:06:27.930 05:06:41 version -- scripts/common.sh@337 -- # IFS=.-: 00:06:27.930 05:06:41 version -- scripts/common.sh@337 -- # read -ra ver2 00:06:27.930 05:06:41 version -- scripts/common.sh@338 -- # local 'op=<' 00:06:27.930 05:06:41 version -- scripts/common.sh@340 -- # ver1_l=2 00:06:27.930 05:06:41 version -- scripts/common.sh@341 -- # ver2_l=1 00:06:27.930 05:06:41 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:27.930 05:06:41 version -- scripts/common.sh@344 -- # case "$op" in 00:06:27.930 05:06:41 version -- scripts/common.sh@345 -- # : 1 00:06:27.930 05:06:41 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:27.930 05:06:41 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:27.930 05:06:41 version -- scripts/common.sh@365 -- # decimal 1 00:06:27.930 05:06:41 version -- scripts/common.sh@353 -- # local d=1 00:06:27.930 05:06:41 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:27.930 05:06:41 version -- scripts/common.sh@355 -- # echo 1 00:06:27.930 05:06:41 version -- scripts/common.sh@365 -- # ver1[v]=1 00:06:27.930 05:06:41 version -- scripts/common.sh@366 -- # decimal 2 00:06:27.930 05:06:41 version -- scripts/common.sh@353 -- # local d=2 00:06:27.930 05:06:41 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:27.930 05:06:41 version -- scripts/common.sh@355 -- # echo 2 00:06:27.930 05:06:41 version -- scripts/common.sh@366 -- # ver2[v]=2 00:06:27.930 05:06:41 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:27.930 05:06:41 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:27.930 05:06:41 version -- scripts/common.sh@368 -- # return 0 00:06:27.930 05:06:41 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:27.930 05:06:41 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:27.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.930 --rc genhtml_branch_coverage=1 00:06:27.930 --rc genhtml_function_coverage=1 00:06:27.930 --rc genhtml_legend=1 00:06:27.930 --rc geninfo_all_blocks=1 00:06:27.930 --rc geninfo_unexecuted_blocks=1 00:06:27.930 00:06:27.930 ' 00:06:27.930 05:06:41 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:27.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.930 --rc genhtml_branch_coverage=1 00:06:27.930 --rc genhtml_function_coverage=1 00:06:27.930 --rc genhtml_legend=1 00:06:27.930 --rc geninfo_all_blocks=1 00:06:27.930 --rc geninfo_unexecuted_blocks=1 00:06:27.930 00:06:27.930 ' 00:06:27.930 05:06:41 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:27.930 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.930 --rc genhtml_branch_coverage=1 00:06:27.930 --rc genhtml_function_coverage=1 00:06:27.930 --rc genhtml_legend=1 00:06:27.930 --rc geninfo_all_blocks=1 00:06:27.930 --rc geninfo_unexecuted_blocks=1 00:06:27.930 00:06:27.930 ' 00:06:27.930 05:06:41 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:27.930 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:27.930 --rc genhtml_branch_coverage=1 00:06:27.930 --rc genhtml_function_coverage=1 00:06:27.930 --rc genhtml_legend=1 00:06:27.930 --rc geninfo_all_blocks=1 00:06:27.930 --rc geninfo_unexecuted_blocks=1 00:06:27.930 00:06:27.930 ' 00:06:27.930 05:06:41 version -- app/version.sh@17 -- # get_header_version major 00:06:27.930 05:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # cut -f2 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.930 05:06:41 version -- app/version.sh@17 -- # major=25 00:06:27.930 05:06:41 version -- app/version.sh@18 -- # get_header_version minor 00:06:27.930 05:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # cut -f2 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.930 05:06:41 version -- app/version.sh@18 -- # minor=1 00:06:27.930 05:06:41 version -- app/version.sh@19 -- # get_header_version patch 00:06:27.930 05:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # cut -f2 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.930 
05:06:41 version -- app/version.sh@19 -- # patch=0 00:06:27.930 05:06:41 version -- app/version.sh@20 -- # get_header_version suffix 00:06:27.930 05:06:41 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # cut -f2 00:06:27.930 05:06:41 version -- app/version.sh@14 -- # tr -d '"' 00:06:27.930 05:06:41 version -- app/version.sh@20 -- # suffix=-pre 00:06:27.930 05:06:41 version -- app/version.sh@22 -- # version=25.1 00:06:27.930 05:06:41 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:27.930 05:06:41 version -- app/version.sh@28 -- # version=25.1rc0 00:06:27.930 05:06:41 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:27.930 05:06:41 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:06:28.190 05:06:41 version -- app/version.sh@30 -- # py_version=25.1rc0 00:06:28.190 05:06:41 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:06:28.190 00:06:28.190 real 0m0.248s 00:06:28.190 user 0m0.154s 00:06:28.190 sys 0m0.138s 00:06:28.190 05:06:41 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:28.190 05:06:41 version -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 ************************************ 00:06:28.190 END TEST version 00:06:28.190 ************************************ 00:06:28.190 05:06:41 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:06:28.190 05:06:41 -- spdk/autotest.sh@194 -- # uname -s 00:06:28.190 05:06:41 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:06:28.190 05:06:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.190 05:06:41 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:06:28.190 05:06:41 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:28.190 05:06:41 -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:28.190 05:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 05:06:41 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:06:28.190 05:06:41 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:06:28.190 05:06:41 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.190 05:06:41 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.190 05:06:41 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.190 05:06:41 -- common/autotest_common.sh@10 -- # set +x 00:06:28.190 ************************************ 00:06:28.190 START TEST nvmf_tcp 00:06:28.190 ************************************ 00:06:28.190 05:06:41 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:28.190 * Looking for test storage... 
00:06:28.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:28.190 05:06:41 nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.190 05:06:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.190 05:06:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.449 05:06:41 nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.449 05:06:41 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:06:28.449 05:06:41 nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.449 05:06:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.449 --rc genhtml_branch_coverage=1 00:06:28.449 --rc genhtml_function_coverage=1 00:06:28.449 --rc genhtml_legend=1 00:06:28.449 --rc geninfo_all_blocks=1 00:06:28.449 --rc geninfo_unexecuted_blocks=1 00:06:28.449 00:06:28.449 ' 00:06:28.449 05:06:41 nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.449 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.450 --rc genhtml_branch_coverage=1 00:06:28.450 --rc genhtml_function_coverage=1 00:06:28.450 --rc genhtml_legend=1 00:06:28.450 --rc geninfo_all_blocks=1 00:06:28.450 --rc geninfo_unexecuted_blocks=1 00:06:28.450 00:06:28.450 ' 00:06:28.450 05:06:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:06:28.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.450 --rc genhtml_branch_coverage=1 00:06:28.450 --rc genhtml_function_coverage=1 00:06:28.450 --rc genhtml_legend=1 00:06:28.450 --rc geninfo_all_blocks=1 00:06:28.450 --rc geninfo_unexecuted_blocks=1 00:06:28.450 00:06:28.450 ' 00:06:28.450 05:06:41 nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.450 --rc genhtml_branch_coverage=1 00:06:28.450 --rc genhtml_function_coverage=1 00:06:28.450 --rc genhtml_legend=1 00:06:28.450 --rc geninfo_all_blocks=1 00:06:28.450 --rc geninfo_unexecuted_blocks=1 00:06:28.450 00:06:28.450 ' 00:06:28.450 05:06:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:28.450 05:06:41 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:28.450 05:06:41 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:28.450 05:06:41 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.450 05:06:41 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.450 05:06:41 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:28.450 ************************************ 00:06:28.450 START TEST nvmf_target_core 00:06:28.450 ************************************ 00:06:28.450 05:06:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:28.450 * Looking for test storage... 
00:06:28.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.450 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.710 --rc genhtml_branch_coverage=1 00:06:28.710 --rc genhtml_function_coverage=1 00:06:28.710 --rc genhtml_legend=1 00:06:28.710 --rc geninfo_all_blocks=1 00:06:28.710 --rc geninfo_unexecuted_blocks=1 00:06:28.710 00:06:28.710 ' 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.710 --rc genhtml_branch_coverage=1 
00:06:28.710 --rc genhtml_function_coverage=1 00:06:28.710 --rc genhtml_legend=1 00:06:28.710 --rc geninfo_all_blocks=1 00:06:28.710 --rc geninfo_unexecuted_blocks=1 00:06:28.710 00:06:28.710 ' 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.710 --rc genhtml_branch_coverage=1 00:06:28.710 --rc genhtml_function_coverage=1 00:06:28.710 --rc genhtml_legend=1 00:06:28.710 --rc geninfo_all_blocks=1 00:06:28.710 --rc geninfo_unexecuted_blocks=1 00:06:28.710 00:06:28.710 ' 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.710 --rc genhtml_branch_coverage=1 00:06:28.710 --rc genhtml_function_coverage=1 00:06:28.710 --rc genhtml_legend=1 00:06:28.710 --rc geninfo_all_blocks=1 00:06:28.710 --rc geninfo_unexecuted_blocks=1 00:06:28.710 00:06:28.710 ' 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.710 05:06:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # : 0 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.711 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' -n '' ']' 
00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:28.711 ************************************ 00:06:28.711 START TEST nvmf_abort 00:06:28.711 ************************************ 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:28.711 * Looking for test storage... 
00:06:28.711 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:28.711 
05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:28.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.711 --rc genhtml_branch_coverage=1 00:06:28.711 --rc genhtml_function_coverage=1 00:06:28.711 --rc genhtml_legend=1 00:06:28.711 --rc geninfo_all_blocks=1 00:06:28.711 --rc 
geninfo_unexecuted_blocks=1 00:06:28.711 00:06:28.711 ' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:28.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.711 --rc genhtml_branch_coverage=1 00:06:28.711 --rc genhtml_function_coverage=1 00:06:28.711 --rc genhtml_legend=1 00:06:28.711 --rc geninfo_all_blocks=1 00:06:28.711 --rc geninfo_unexecuted_blocks=1 00:06:28.711 00:06:28.711 ' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:28.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.711 --rc genhtml_branch_coverage=1 00:06:28.711 --rc genhtml_function_coverage=1 00:06:28.711 --rc genhtml_legend=1 00:06:28.711 --rc geninfo_all_blocks=1 00:06:28.711 --rc geninfo_unexecuted_blocks=1 00:06:28.711 00:06:28.711 ' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:28.711 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:28.711 --rc genhtml_branch_coverage=1 00:06:28.711 --rc genhtml_function_coverage=1 00:06:28.711 --rc genhtml_legend=1 00:06:28.711 --rc geninfo_all_blocks=1 00:06:28.711 --rc geninfo_unexecuted_blocks=1 00:06:28.711 00:06:28.711 ' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:28.711 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:28.971 05:06:42 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:28.971 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:28.971 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:06:28.972 05:06:42 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:35.544 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:35.545 05:06:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:35.545 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:35.545 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:35.545 05:06:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:35.545 Found net devices under 0000:af:00.0: cvl_0_0 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net 
devices under 0000:af:00.1: cvl_0_1' 00:06:35.545 Found net devices under 0000:af:00.1: cvl_0_1 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:35.545 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:35.545 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:06:35.545 00:06:35.545 --- 10.0.0.2 ping statistics --- 00:06:35.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.545 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:35.545 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:35.545 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.198 ms 00:06:35.545 00:06:35.545 --- 10.0.0.1 ping statistics --- 00:06:35.545 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:35.545 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@509 -- # nvmfpid=122804 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 122804 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 122804 ']' 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:35.545 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:35.545 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.545 [2024-12-15 05:06:48.516626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:06:35.546 [2024-12-15 05:06:48.516667] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:35.546 [2024-12-15 05:06:48.590362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:35.546 [2024-12-15 05:06:48.614446] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:35.546 [2024-12-15 05:06:48.614482] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:35.546 [2024-12-15 05:06:48.614490] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:35.546 [2024-12-15 05:06:48.614496] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:35.546 [2024-12-15 05:06:48.614502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:35.546 [2024-12-15 05:06:48.615866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:35.546 [2024-12-15 05:06:48.615952] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:35.546 [2024-12-15 05:06:48.615953] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 [2024-12-15 05:06:48.755484] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 Malloc0 00:06:35.546 05:06:48 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 Delay0 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 [2024-12-15 05:06:48.839770] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.546 05:06:48 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:35.546 [2024-12-15 05:06:48.962783] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:37.451 Initializing NVMe Controllers 00:06:37.451 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:37.451 controller IO queue size 128 less than required 00:06:37.451 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:37.451 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:37.451 Initialization complete. Launching workers. 
00:06:37.451 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38735 00:06:37.451 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38796, failed to submit 62 00:06:37.451 success 38739, unsuccessful 57, failed 0 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:06:37.451 rmmod nvme_tcp 00:06:37.451 rmmod nvme_fabrics 00:06:37.451 rmmod nvme_keyring 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:06:37.451 05:06:51 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 122804 ']' 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 122804 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 122804 ']' 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 122804 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:37.451 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 122804 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 122804' 00:06:37.709 killing process with pid 122804 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 122804 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 122804 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # 
iptables-restore 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:37.709 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:37.710 05:06:51 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:06:40.245 00:06:40.245 real 0m11.217s 00:06:40.245 user 0m11.774s 00:06:40.245 sys 0m5.141s 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:40.245 ************************************ 00:06:40.245 END TEST nvmf_abort 00:06:40.245 ************************************ 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:40.245 ************************************ 00:06:40.245 START TEST nvmf_ns_hotplug_stress 00:06:40.245 ************************************ 00:06:40.245 05:06:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:40.245 * Looking for test storage... 00:06:40.245 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:06:40.245 
05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:40.245 05:06:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:40.245 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.245 --rc genhtml_branch_coverage=1 00:06:40.245 --rc genhtml_function_coverage=1 00:06:40.245 --rc genhtml_legend=1 00:06:40.245 --rc geninfo_all_blocks=1 00:06:40.245 --rc geninfo_unexecuted_blocks=1 00:06:40.245 00:06:40.245 ' 00:06:40.245 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:40.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.246 --rc genhtml_branch_coverage=1 00:06:40.246 --rc genhtml_function_coverage=1 00:06:40.246 --rc genhtml_legend=1 00:06:40.246 --rc geninfo_all_blocks=1 00:06:40.246 --rc geninfo_unexecuted_blocks=1 00:06:40.246 00:06:40.246 ' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:40.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.246 --rc genhtml_branch_coverage=1 00:06:40.246 --rc genhtml_function_coverage=1 00:06:40.246 --rc genhtml_legend=1 00:06:40.246 --rc geninfo_all_blocks=1 00:06:40.246 --rc geninfo_unexecuted_blocks=1 00:06:40.246 00:06:40.246 ' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:40.246 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:40.246 --rc genhtml_branch_coverage=1 00:06:40.246 --rc genhtml_function_coverage=1 00:06:40.246 --rc genhtml_legend=1 00:06:40.246 --rc geninfo_all_blocks=1 00:06:40.246 --rc geninfo_unexecuted_blocks=1 00:06:40.246 
00:06:40.246 ' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:06:40.246 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:06:40.246 05:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.820 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:46.820 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:06:46.820 05:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:06:46.820 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:06:46.820 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:06:46.820 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:06:46.821 Found 0000:af:00.0 (0x8086 - 0x159b) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:06:46.821 Found 0000:af:00.1 (0x8086 - 0x159b) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:06:46.821 05:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:06:46.821 Found net devices under 0000:af:00.0: cvl_0_0 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:06:46.821 05:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:06:46.821 Found net devices under 0000:af:00.1: cvl_0_1 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:46.821 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:46.822 05:06:59 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:06:46.822 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:46.822 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.357 ms 00:06:46.822 00:06:46.822 --- 10.0.0.2 ping statistics --- 00:06:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.822 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:46.822 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:46.822 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:06:46.822 00:06:46.822 --- 10.0.0.1 ping statistics --- 00:06:46.822 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:46.822 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=126869 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 126869 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 126869 ']' 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:46.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.822 [2024-12-15 05:06:59.740885] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:06:46.822 [2024-12-15 05:06:59.740929] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:46.822 [2024-12-15 05:06:59.818903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:46.822 [2024-12-15 05:06:59.839928] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:46.822 [2024-12-15 05:06:59.839964] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:46.822 [2024-12-15 05:06:59.839970] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:46.822 [2024-12-15 05:06:59.839976] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:46.822 [2024-12-15 05:06:59.839981] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:46.822 [2024-12-15 05:06:59.841211] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:46.822 [2024-12-15 05:06:59.841298] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:46.822 [2024-12-15 05:06:59.841299] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:46.822 05:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:46.822 [2024-12-15 05:07:00.153298] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:46.822 05:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:46.822 05:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:47.080 [2024-12-15 05:07:00.546679] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:47.080 05:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:47.339 05:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:47.339 Malloc0 00:06:47.339 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:47.597 Delay0 00:06:47.597 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:47.856 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:48.114 NULL1 00:06:48.114 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:48.114 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:48.114 05:07:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=127133 00:06:48.114 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:48.114 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:48.373 Read completed with error (sct=0, sc=11) 00:06:48.373 05:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.373 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:48.631 05:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:48.631 05:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:48.889 true 00:06:48.889 05:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:48.889 05:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:49.825 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:06:49.825 05:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:49.825 05:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:49.825 05:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:50.083 true 00:06:50.083 05:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:50.083 05:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:50.342 05:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:50.601 05:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:50.601 05:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:50.601 true 00:06:50.601 05:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:50.601 05:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:51.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.979 
05:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:51.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.979 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:51.979 05:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:51.979 05:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:52.237 true 00:06:52.237 05:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:52.237 05:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.174 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:53.174 05:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.432 05:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:53.432 05:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:53.432 true 
00:06:53.432 05:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:53.432 05:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:53.690 05:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:53.947 05:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:53.947 05:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:53.947 true 00:06:54.205 05:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:54.205 05:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:55.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.141 05:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.141 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.399 Message suppressed 999 times: Read completed with 
error (sct=0, sc=11) 00:06:55.399 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:55.399 05:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:55.399 05:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:55.658 true 00:06:55.658 05:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:55.658 05:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.594 05:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.594 05:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:56.594 05:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:56.852 true 00:06:56.852 05:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:56.852 05:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.111 05:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.369 05:07:10 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:57.369 05:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:57.369 true 00:06:57.628 05:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:57.628 05:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.565 05:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:58.823 05:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:58.823 05:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:58.823 true 00:06:58.823 05:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:06:58.824 05:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:06:59.760 05:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.760 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:00.018 05:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:00.018 05:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:00.018 true 00:07:00.018 05:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:00.277 05:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.277 05:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.536 05:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:00.536 05:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:00.795 true 00:07:00.795 05:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:00.795 05:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.171 05:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.171 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:02.171 05:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:02.171 05:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:02.171 true 00:07:02.430 05:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:02.430 05:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.998 05:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.998 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:03.256 05:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:03.256 05:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:03.514 true 00:07:03.514 05:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:03.514 05:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.773 05:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.032 05:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:04.032 05:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:04.032 true 00:07:04.032 05:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:04.032 05:07:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.409 05:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.409 Message suppressed 999 times: 
Read completed with error (sct=0, sc=11) 00:07:05.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.409 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:05.409 05:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:05.409 05:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:05.667 true 00:07:05.667 05:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:05.667 05:07:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.604 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:06.604 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.604 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:06.604 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:06.863 true 00:07:06.863 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:06.863 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:07:07.122 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.381 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:07.381 05:07:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:07.381 true 00:07:07.381 05:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:07.381 05:07:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.758 05:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.758 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:08.758 05:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:08.758 05:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1019 00:07:09.017 true 00:07:09.017 05:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:09.017 05:07:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.952 05:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.952 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:09.952 05:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:09.952 05:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:10.211 true 00:07:10.211 05:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:10.211 05:07:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.470 05:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.729 05:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:10.729 05:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:10.729 true 00:07:10.729 05:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:10.729 05:07:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.103 05:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.103 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:12.103 05:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:12.103 05:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:12.361 true 00:07:12.361 05:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:12.361 05:07:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.303 05:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.303 05:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:13.303 05:07:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:13.562 true 00:07:13.562 05:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:13.562 05:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.820 05:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.079 05:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:14.079 05:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:14.079 true 00:07:14.079 05:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:14.079 05:07:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.455 05:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.455 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:15.456 05:07:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:15.456 05:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:15.714 true 00:07:15.714 05:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:15.715 05:07:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.651 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:07:16.651 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.651 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:16.651 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:16.909 true 00:07:16.909 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:16.909 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.167 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.167 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:17.167 05:07:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:17.426 true 00:07:17.426 05:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:17.426 05:07:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.802 Initializing NVMe Controllers 00:07:18.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:18.802 Controller IO queue size 128, less than required. 00:07:18.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.802 Controller IO queue size 128, less than required. 00:07:18.802 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:18.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:07:18.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:18.802 Initialization complete. Launching workers. 
00:07:18.802 ======================================================== 00:07:18.802 Latency(us) 00:07:18.802 Device Information : IOPS MiB/s Average min max 00:07:18.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2233.80 1.09 39789.72 2016.03 1013091.48 00:07:18.802 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18043.77 8.81 7093.36 1575.04 441148.85 00:07:18.802 ======================================================== 00:07:18.802 Total : 20277.57 9.90 10695.23 1575.04 1013091.48 00:07:18.802 00:07:18.802 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.802 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:18.802 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:19.061 true 00:07:19.061 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 127133 00:07:19.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (127133) - No such process 00:07:19.061 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 127133 00:07:19.061 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.061 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:19.319 
05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:19.320 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:19.320 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:19.320 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.320 05:07:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:19.578 null0 00:07:19.578 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.578 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.578 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:19.837 null1 00:07:19.837 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:19.837 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:19.837 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:20.096 null2 00:07:20.096 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.096 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.096 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:20.096 null3 00:07:20.096 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.096 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.096 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:20.354 null4 00:07:20.354 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.354 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.354 05:07:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:20.612 null5 00:07:20.612 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.612 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.612 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:20.871 null6 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:20.871 null7 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.871 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 132609 132611 132613 132618 132621 132624 132627 132630 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:20.872 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.131 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:21.390 05:07:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.649 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:21.908 05:07:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:21.908 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.909 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:21.909 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.168 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.428 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.428 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.428 05:07:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.428 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.428 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.428 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.428 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.428 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:22.687 05:07:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.687 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.946 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.947 
05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:22.947 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.206 05:07:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.465 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.724 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.724 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:23.724 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.724 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:23.724 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:23.724 05:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.724 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:23.724 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 
05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 
nqn.2016-06.io.spdk:cnode1 null2 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:23.984 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.244 05:07:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.244 05:07:37 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:24.503 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.762 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:24.763 05:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:24.763 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.024 05:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:25.024 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:25.025 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:25.025 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:25.025 05:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:25.025 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:07:25.025 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:25.025 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:07:25.025 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:25.025 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:25.285 rmmod nvme_tcp 00:07:25.285 rmmod nvme_fabrics 00:07:25.285 rmmod nvme_keyring 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 126869 ']' 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 126869 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 126869 ']' 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 126869 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 126869 00:07:25.286 05:07:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 126869' 00:07:25.286 killing process with pid 126869 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 126869 00:07:25.286 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 126869 00:07:25.544 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:25.544 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:25.544 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:25.544 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:07:25.544 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:07:25.544 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:25.544 05:07:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:07:25.544 05:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:25.544 05:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:25.544 05:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:25.544 05:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval 
'_remove_spdk_ns 15> /dev/null' 00:07:25.544 05:07:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.448 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:27.448 00:07:27.448 real 0m47.576s 00:07:27.448 user 3m14.706s 00:07:27.448 sys 0m14.950s 00:07:27.448 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:27.448 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:27.448 ************************************ 00:07:27.448 END TEST nvmf_ns_hotplug_stress 00:07:27.448 ************************************ 00:07:27.449 05:07:41 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:27.449 05:07:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:27.449 05:07:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:27.449 05:07:41 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:27.708 ************************************ 00:07:27.708 START TEST nvmf_delete_subsystem 00:07:27.708 ************************************ 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:27.708 * Looking for test storage... 
00:07:27.708 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:07:27.708 05:07:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.708 --rc genhtml_branch_coverage=1 00:07:27.708 --rc genhtml_function_coverage=1 00:07:27.708 --rc genhtml_legend=1 00:07:27.708 --rc geninfo_all_blocks=1 00:07:27.708 --rc geninfo_unexecuted_blocks=1 00:07:27.708 00:07:27.708 ' 00:07:27.708 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:27.708 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.708 --rc genhtml_branch_coverage=1 00:07:27.708 --rc genhtml_function_coverage=1 00:07:27.708 --rc genhtml_legend=1 00:07:27.708 --rc geninfo_all_blocks=1 00:07:27.708 --rc geninfo_unexecuted_blocks=1 00:07:27.708 00:07:27.709 ' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.709 --rc genhtml_branch_coverage=1 00:07:27.709 --rc genhtml_function_coverage=1 00:07:27.709 --rc genhtml_legend=1 00:07:27.709 --rc geninfo_all_blocks=1 00:07:27.709 --rc geninfo_unexecuted_blocks=1 00:07:27.709 00:07:27.709 ' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:27.709 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:27.709 --rc genhtml_branch_coverage=1 00:07:27.709 --rc genhtml_function_coverage=1 00:07:27.709 --rc genhtml_legend=1 00:07:27.709 --rc geninfo_all_blocks=1 00:07:27.709 --rc geninfo_unexecuted_blocks=1 00:07:27.709 00:07:27.709 ' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@7 -- # uname -s 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:27.709 05:07:41 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:27.709 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # 
nvmftestinit 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:07:27.709 05:07:41 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.278 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:34.278 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:07:34.278 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:34.278 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:34.278 05:07:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:34.278 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # local -ga mlx 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:34.279 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 
-- # [[ ice == unknown ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:34.279 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:34.279 Found net devices under 0000:af:00.0: cvl_0_0 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.1: cvl_0_1' 00:07:34.279 Found net devices under 0000:af:00.1: cvl_0_1 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:34.279 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:34.280 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:34.280 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.331 ms 00:07:34.280 00:07:34.280 --- 10.0.0.2 ping statistics --- 00:07:34.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.280 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:34.280 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:34.280 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.192 ms 00:07:34.280 00:07:34.280 --- 10.0.0.1 ping statistics --- 00:07:34.280 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:34.280 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:34.280 05:07:47 
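The nvmftestinit trace above builds the test network by hand: the target-side port is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, an iptables rule opens TCP port 4420, and a ping in each direction verifies connectivity. The sequence can be condensed into a sketch (a sketch only, not the common.sh implementation; the device names cvl_0_0/cvl_0_1 and the addressing are taken from this run, and root is required):

```shell
# Recreate the namespace-based NVMe/TCP test topology shown in the log.
# setup_nvmf_tcp_net <target_if> <initiator_if> [namespace]
setup_nvmf_tcp_net() {
  local target_if=$1 initiator_if=$2 ns=${3:-cvl_0_0_ns_spdk}
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"                          # target port isolated in its own namespace
  ip addr add 10.0.0.1/24 dev "$initiator_if"                   # initiator side (NVMF_INITIATOR_IP)
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"  # target side (NVMF_FIRST_TARGET_IP)
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT  # allow NVMe/TCP traffic
  ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1          # both directions must answer
}
```

Because the target runs inside the namespace, every target-side command in the log is prefixed with `ip netns exec cvl_0_0_ns_spdk` (the `NVMF_TARGET_NS_CMD` array).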
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=137126 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 137126 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 137126 ']' 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:34.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 [2024-12-15 05:07:47.492797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:34.280 [2024-12-15 05:07:47.492838] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:34.280 [2024-12-15 05:07:47.571799] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:34.280 [2024-12-15 05:07:47.593400] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:34.280 [2024-12-15 05:07:47.593437] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:34.280 [2024-12-15 05:07:47.593444] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:34.280 [2024-12-15 05:07:47.593450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:34.280 [2024-12-15 05:07:47.593456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:34.280 [2024-12-15 05:07:47.594563] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:34.280 [2024-12-15 05:07:47.594564] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 [2024-12-15 05:07:47.726268] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- common/autotest_common.sh@10 -- # set +x 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 [2024-12-15 05:07:47.750480] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 NULL1 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 Delay0 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.280 05:07:47 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=137151 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:34.280 05:07:47 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:34.280 [2024-12-15 05:07:47.867331] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
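Leading up to the perf run, delete_subsystem.sh issues the RPC sequence recorded above: create the TCP transport, a subsystem, a listener, a null bdev wrapped in a delay bdev, and attach it as a namespace. Condensed as a sketch (assumes SPDK's scripts/rpc.py is reachable as `rpc.py` and talks to the target started in the namespace; the NQN, serial, and sizes are taken from this run):

```shell
# Provision the target exactly as the rpc_cmd trace above does.
provision_delete_subsystem_target() {
  local nqn=nqn.2016-06.io.spdk:cnode1
  rpc.py nvmf_create_transport -t tcp -o -u 8192
  rpc.py nvmf_create_subsystem "$nqn" -a -s SPDK00000000000001 -m 10
  rpc.py nvmf_subsystem_add_listener "$nqn" -t tcp -a 10.0.0.2 -s 4420
  rpc.py bdev_null_create NULL1 1000 512       # 1000 MiB logical size, 512-byte blocks
  # Wrap NULL1 in a delay bdev so plenty of I/O is still in flight
  # when the subsystem is deleted mid-workload.
  rpc.py bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  rpc.py nvmf_subsystem_add_ns "$nqn" Delay0
}
```

With spdk_nvme_perf running against the listener, the test then calls `rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1`, which is what aborts the outstanding I/O in the completions that follow.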
00:07:36.183 05:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:36.183 05:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:36.183 05:07:49 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error 
(sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with 
error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.442 Read completed with error (sct=0, sc=8) 00:07:36.442 Write completed with error (sct=0, sc=8) 00:07:36.442 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write 
completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 starting I/O failed: -6 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read 
completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 starting I/O failed: -6 00:07:36.443 [2024-12-15 05:07:49.987073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f122800d4d0 is same with the state(6) to be set 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with 
error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 
00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Read completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 Write completed with error (sct=0, sc=8) 00:07:36.443 [2024-12-15 05:07:49.987443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f1228000c80 is same with the state(6) to be set 00:07:37.379 [2024-12-15 05:07:50.961832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b85260 is same with the state(6) to be set 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 
00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 [2024-12-15 05:07:50.986940] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b87c60 is same with the state(6) to be set 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read 
completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Write completed with error (sct=0, sc=8) 00:07:37.379 Read completed with error (sct=0, sc=8) 00:07:37.379 [2024-12-15 05:07:50.987112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bdc5f0 is same with the state(6) to be set 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with 
error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 [2024-12-15 05:07:50.989261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f122800d800 is same with the state(6) to be set 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Write completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 Read completed with error (sct=0, sc=8) 00:07:37.380 [2024-12-15 05:07:50.990730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f122800d060 is same with the state(6) to be set 00:07:37.380 Initializing NVMe Controllers 00:07:37.380 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:37.380 Controller IO queue size 128, less than required. 00:07:37.380 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:37.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:37.380 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:37.380 Initialization complete. Launching workers. 00:07:37.380 ======================================================== 00:07:37.380 Latency(us) 00:07:37.380 Device Information : IOPS MiB/s Average min max 00:07:37.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 176.75 0.09 927040.12 304.03 1007399.98 00:07:37.380 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 163.31 0.08 909970.19 385.94 1010841.59 00:07:37.380 ======================================================== 00:07:37.380 Total : 340.06 0.17 918842.56 304.03 1010841.59 00:07:37.380 00:07:37.380 [2024-12-15 05:07:50.991346] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b85260 (9): Bad file descriptor 00:07:37.380 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:07:37.380 05:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.380 05:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:07:37.380 05:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137151 00:07:37.380 05:07:50 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 137151 00:07:37.947 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (137151) - No such process 00:07:37.947 05:07:51 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 137151 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 137151 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 137151 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.947 [2024-12-15 05:07:51.522471] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=137823 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:37.947 05:07:51 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:37.947 [2024-12-15 05:07:51.609392] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:38.514 05:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:38.514 05:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:38.514 05:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.081 05:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.081 05:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:39.081 05:07:52 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.648 05:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.648 05:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:39.648 05:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:39.907 05:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:39.907 05:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:39.907 05:07:53 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:40.475 05:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:40.475 05:07:54 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:40.475 05:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.059 05:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.059 05:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:41.059 05:07:54 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:41.059 Initializing NVMe Controllers 00:07:41.059 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:41.059 Controller IO queue size 128, less than required. 00:07:41.059 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:41.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:41.059 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:41.059 Initialization complete. Launching workers. 
00:07:41.059 ======================================================== 00:07:41.059 Latency(us) 00:07:41.059 Device Information : IOPS MiB/s Average min max 00:07:41.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001885.65 1000121.57 1005595.15 00:07:41.059 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003841.22 1000186.54 1009716.11 00:07:41.059 ======================================================== 00:07:41.059 Total : 256.00 0.12 1002863.43 1000121.57 1009716.11 00:07:41.059 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 137823 00:07:41.628 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (137823) - No such process 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 137823 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r 
nvme-tcp 00:07:41.628 rmmod nvme_tcp 00:07:41.628 rmmod nvme_fabrics 00:07:41.628 rmmod nvme_keyring 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 137126 ']' 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 137126 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 137126 ']' 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 137126 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 137126 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 137126' 00:07:41.628 killing process with pid 137126 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 137126 00:07:41.628 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 137126 
00:07:41.887 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:41.888 05:07:55 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:43.797 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:43.797 00:07:43.797 real 0m16.272s 00:07:43.797 user 0m29.267s 00:07:43.797 sys 0m5.401s 00:07:43.797 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:43.797 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:43.797 ************************************ 00:07:43.797 END TEST 
nvmf_delete_subsystem 00:07:43.797 ************************************ 00:07:43.797 05:07:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:43.797 05:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:43.797 05:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:43.797 05:07:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:44.057 ************************************ 00:07:44.057 START TEST nvmf_host_management 00:07:44.057 ************************************ 00:07:44.057 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:44.057 * Looking for test storage... 00:07:44.057 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:44.057 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:44.057 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:07:44.057 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:44.057 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:44.057 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:44.058 05:07:57 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.058 --rc genhtml_branch_coverage=1 00:07:44.058 --rc genhtml_function_coverage=1 00:07:44.058 --rc genhtml_legend=1 00:07:44.058 --rc 
geninfo_all_blocks=1 00:07:44.058 --rc geninfo_unexecuted_blocks=1 00:07:44.058 00:07:44.058 ' 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.058 --rc genhtml_branch_coverage=1 00:07:44.058 --rc genhtml_function_coverage=1 00:07:44.058 --rc genhtml_legend=1 00:07:44.058 --rc geninfo_all_blocks=1 00:07:44.058 --rc geninfo_unexecuted_blocks=1 00:07:44.058 00:07:44.058 ' 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.058 --rc genhtml_branch_coverage=1 00:07:44.058 --rc genhtml_function_coverage=1 00:07:44.058 --rc genhtml_legend=1 00:07:44.058 --rc geninfo_all_blocks=1 00:07:44.058 --rc geninfo_unexecuted_blocks=1 00:07:44.058 00:07:44.058 ' 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:44.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:44.058 --rc genhtml_branch_coverage=1 00:07:44.058 --rc genhtml_function_coverage=1 00:07:44.058 --rc genhtml_legend=1 00:07:44.058 --rc geninfo_all_blocks=1 00:07:44.058 --rc geninfo_unexecuted_blocks=1 00:07:44.058 00:07:44.058 ' 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:07:44.058 
05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.058 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:44.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:07:44.059 05:07:57 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 
00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # 
pci_devs+=("${e810[@]}") 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:07:50.642 Found 0000:af:00.0 (0x8086 - 0x159b) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:07:50.642 Found 0000:af:00.1 (0x8086 - 0x159b) 00:07:50.642 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:07:50.642 05:08:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:07:50.643 Found net devices under 0000:af:00.0: cvl_0_0 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # 
net_devs+=("${pci_net_devs[@]}") 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:07:50.643 Found net devices under 0000:af:00.1: cvl_0_1 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_0 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:07:50.643 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:50.643 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.368 ms 00:07:50.643 00:07:50.643 --- 10.0.0.2 ping statistics --- 00:07:50.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.643 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:50.643 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:50.643 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.123 ms 00:07:50.643 00:07:50.643 --- 10.0.0.1 ping statistics --- 00:07:50.643 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:50.643 rtt min/avg/max/mdev = 0.123/0.123/0.123/0.000 ms 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.643 05:08:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=141969 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 141969 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 141969 ']' 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.643 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:50.643 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.644 [2024-12-15 05:08:03.722387] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:07:50.644 [2024-12-15 05:08:03.722435] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:50.644 [2024-12-15 05:08:03.805503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:50.644 [2024-12-15 05:08:03.828957] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:50.644 [2024-12-15 05:08:03.829000] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:50.644 [2024-12-15 05:08:03.829008] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:50.644 [2024-12-15 05:08:03.829014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:50.644 [2024-12-15 05:08:03.829019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:50.644 [2024-12-15 05:08:03.830391] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:50.644 [2024-12-15 05:08:03.830499] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:50.644 [2024-12-15 05:08:03.830605] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:50.644 [2024-12-15 05:08:03.830607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.644 [2024-12-15 05:08:03.965875] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:50.644 05:08:03 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:50.644 05:08:03 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.644 Malloc0 00:07:50.644 [2024-12-15 05:08:04.042666] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=142016 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 142016 /var/tmp/bdevperf.sock 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 142016 ']' 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:50.644 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:50.644 { 00:07:50.644 "params": { 00:07:50.644 "name": "Nvme$subsystem", 00:07:50.644 "trtype": "$TEST_TRANSPORT", 00:07:50.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:50.644 "adrfam": "ipv4", 00:07:50.644 "trsvcid": "$NVMF_PORT", 00:07:50.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:50.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:50.644 "hdgst": ${hdgst:-false}, 
00:07:50.644 "ddgst": ${ddgst:-false} 00:07:50.644 }, 00:07:50.644 "method": "bdev_nvme_attach_controller" 00:07:50.644 } 00:07:50.644 EOF 00:07:50.644 )") 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:50.644 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:50.644 "params": { 00:07:50.644 "name": "Nvme0", 00:07:50.644 "trtype": "tcp", 00:07:50.644 "traddr": "10.0.0.2", 00:07:50.644 "adrfam": "ipv4", 00:07:50.644 "trsvcid": "4420", 00:07:50.644 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:50.644 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:50.644 "hdgst": false, 00:07:50.644 "ddgst": false 00:07:50.644 }, 00:07:50.644 "method": "bdev_nvme_attach_controller" 00:07:50.644 }' 00:07:50.644 [2024-12-15 05:08:04.138950] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:50.644 [2024-12-15 05:08:04.138999] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142016 ] 00:07:50.644 [2024-12-15 05:08:04.214183] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:50.644 [2024-12-15 05:08:04.236527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:50.904 Running I/O for 10 seconds... 
00:07:51.166 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:51.166 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:51.166 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:51.166 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.166 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.166 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=106 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 106 -ge 100 ']' 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.167 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.167 [2024-12-15 05:08:04.656646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.656687] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.656695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same 
with the state(6) to be set 00:07:51.167 [... further identical 'nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set' messages (timestamps 05:08:04.656701 through 05:08:04.656840) omitted ...] 00:07:51.167 [2024-12-15 05:08:04.656845]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.656851] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.656856] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.656862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.656867] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5a1590 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.658412] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.167 [2024-12-15 05:08:04.658444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658454] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.167 [2024-12-15 05:08:04.658461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:51.167 [2024-12-15 05:08:04.658476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658483] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:07:51.167 [2024-12-15 05:08:04.658489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658496] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x5aed40 is same with the state(6) to be set 00:07:51.167 [2024-12-15 05:08:04.658792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.167 [2024-12-15 05:08:04.658804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.167 [2024-12-15 05:08:04.658824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.167 [2024-12-15 05:08:04.658840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.167 [2024-12-15 05:08:04.658855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.167 [2024-12-15 05:08:04.658863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.168 [2024-12-15 05:08:04.658869] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.168 [... analogous WRITE/completion pairs for cid 5 through 59 (lba 25216 through 32128, len:128 each) omitted: every outstanding WRITE on sqid:1 was aborted with SQ DELETION, the entries differing only in cid/lba ...] 00:07:51.169 [2024-12-15 05:08:04.659713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.169 [2024-12-15 05:08:04.659720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:07:51.169 [2024-12-15 05:08:04.659728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.169 [2024-12-15 05:08:04.659735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.169 [2024-12-15 05:08:04.659742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.169 [2024-12-15 05:08:04.659749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.169 [2024-12-15 05:08:04.659758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:51.169 [2024-12-15 05:08:04.659765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.169 [2024-12-15 05:08:04.659788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:07:51.169 [2024-12-15 05:08:04.660711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:51.169 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.169 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:51.169 task offset: 24576 on job bdev=Nvme0n1 fails 00:07:51.169 00:07:51.169 Latency(us) 00:07:51.169 [2024-12-15T04:08:04.856Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:51.169 Job: Nvme0n1 (Core Mask 0x1, workload: 
verify, depth: 64, IO size: 65536) 00:07:51.169 Job: Nvme0n1 ended in about 0.11 seconds with error 00:07:51.169 Verification LBA range: start 0x0 length 0x400 00:07:51.169 Nvme0n1 : 0.11 1755.72 109.73 585.24 0.00 25217.07 1669.61 26838.55 00:07:51.169 [2024-12-15T04:08:04.856Z] =================================================================================================================== 00:07:51.169 [2024-12-15T04:08:04.856Z] Total : 1755.72 109.73 585.24 0.00 25217.07 1669.61 26838.55 00:07:51.169 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:51.169 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:51.169 [2024-12-15 05:08:04.663039] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:51.169 [2024-12-15 05:08:04.663057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5aed40 (9): Bad file descriptor 00:07:51.170 [2024-12-15 05:08:04.664484] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:07:51.170 [2024-12-15 05:08:04.664551] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:07:51.170 [2024-12-15 05:08:04.664575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:51.170 [2024-12-15 05:08:04.664589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:07:51.170 [2024-12-15 05:08:04.664596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:07:51.170 [2024-12-15 05:08:04.664603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll 
NVMe-oF Fabric CONNECT command 00:07:51.170 [2024-12-15 05:08:04.664610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x5aed40 00:07:51.170 [2024-12-15 05:08:04.664628] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5aed40 (9): Bad file descriptor 00:07:51.170 [2024-12-15 05:08:04.664640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:07:51.170 [2024-12-15 05:08:04.664647] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:07:51.170 [2024-12-15 05:08:04.664655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:07:51.170 [2024-12-15 05:08:04.664663] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:07:51.170 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:51.170 05:08:04 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 142016 00:07:52.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (142016) - No such process 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify 
-t 1 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:07:52.108 { 00:07:52.108 "params": { 00:07:52.108 "name": "Nvme$subsystem", 00:07:52.108 "trtype": "$TEST_TRANSPORT", 00:07:52.108 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:52.108 "adrfam": "ipv4", 00:07:52.108 "trsvcid": "$NVMF_PORT", 00:07:52.108 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:52.108 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:52.108 "hdgst": ${hdgst:-false}, 00:07:52.108 "ddgst": ${ddgst:-false} 00:07:52.108 }, 00:07:52.108 "method": "bdev_nvme_attach_controller" 00:07:52.108 } 00:07:52.108 EOF 00:07:52.108 )") 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 
00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:07:52.108 05:08:05 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:07:52.108 "params": { 00:07:52.108 "name": "Nvme0", 00:07:52.108 "trtype": "tcp", 00:07:52.108 "traddr": "10.0.0.2", 00:07:52.108 "adrfam": "ipv4", 00:07:52.108 "trsvcid": "4420", 00:07:52.108 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:52.108 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:52.108 "hdgst": false, 00:07:52.108 "ddgst": false 00:07:52.108 }, 00:07:52.108 "method": "bdev_nvme_attach_controller" 00:07:52.109 }' 00:07:52.109 [2024-12-15 05:08:05.725121] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:07:52.109 [2024-12-15 05:08:05.725167] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142252 ] 00:07:52.368 [2024-12-15 05:08:05.801507] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:52.368 [2024-12-15 05:08:05.822378] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:52.368 Running I/O for 1 seconds... 
00:07:53.748 2048.00 IOPS, 128.00 MiB/s 00:07:53.748 Latency(us) 00:07:53.748 [2024-12-15T04:08:07.435Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:53.748 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:53.748 Verification LBA range: start 0x0 length 0x400 00:07:53.748 Nvme0n1 : 1.01 2083.15 130.20 0.00 0.00 30239.49 4056.99 27337.87 00:07:53.748 [2024-12-15T04:08:07.435Z] =================================================================================================================== 00:07:53.748 [2024-12-15T04:08:07.435Z] Total : 2083.15 130.20 0.00 0.00 30239.49 4056.99 27337.87 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:07:53.748 05:08:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:07:53.748 rmmod nvme_tcp 00:07:53.748 rmmod nvme_fabrics 00:07:53.748 rmmod nvme_keyring 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 141969 ']' 00:07:53.748 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 141969 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 141969 ']' 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 141969 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 141969 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 141969' 00:07:53.749 killing process with pid 141969 00:07:53.749 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 141969 00:07:53.749 05:08:07 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 141969 00:07:54.009 [2024-12-15 05:08:07.484556] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:54.009 05:08:07 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:55.918 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:07:55.919 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:55.919 00:07:55.919 real 0m12.096s 00:07:55.919 user 0m18.420s 
00:07:55.919 sys 0m5.438s 00:07:55.919 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:55.919 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:55.919 ************************************ 00:07:55.919 END TEST nvmf_host_management 00:07:55.919 ************************************ 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:56.179 ************************************ 00:07:56.179 START TEST nvmf_lvol 00:07:56.179 ************************************ 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:56.179 * Looking for test storage... 
00:07:56.179 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:56.179 05:08:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:56.179 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:56.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.180 --rc genhtml_branch_coverage=1 00:07:56.180 --rc genhtml_function_coverage=1 00:07:56.180 --rc genhtml_legend=1 00:07:56.180 --rc geninfo_all_blocks=1 00:07:56.180 --rc geninfo_unexecuted_blocks=1 
00:07:56.180 00:07:56.180 ' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:56.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.180 --rc genhtml_branch_coverage=1 00:07:56.180 --rc genhtml_function_coverage=1 00:07:56.180 --rc genhtml_legend=1 00:07:56.180 --rc geninfo_all_blocks=1 00:07:56.180 --rc geninfo_unexecuted_blocks=1 00:07:56.180 00:07:56.180 ' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:56.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.180 --rc genhtml_branch_coverage=1 00:07:56.180 --rc genhtml_function_coverage=1 00:07:56.180 --rc genhtml_legend=1 00:07:56.180 --rc geninfo_all_blocks=1 00:07:56.180 --rc geninfo_unexecuted_blocks=1 00:07:56.180 00:07:56.180 ' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:56.180 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:56.180 --rc genhtml_branch_coverage=1 00:07:56.180 --rc genhtml_function_coverage=1 00:07:56.180 --rc genhtml_legend=1 00:07:56.180 --rc geninfo_all_blocks=1 00:07:56.180 --rc geninfo_unexecuted_blocks=1 00:07:56.180 00:07:56.180 ' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:56.180 05:08:09 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:07:56.180 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:56.180 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@476 -- # prepare_net_devs 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:07:56.441 05:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@322 -- # mlx=() 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:03.078 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 
00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:03.079 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:03.079 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:03.079 
05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:03.079 Found net devices under 0000:af:00.0: cvl_0_0 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:03.079 05:08:15 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:03.079 Found net devices under 0000:af:00.1: cvl_0_1 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@263 
-- # NVMF_SECOND_INITIATOR_IP= 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:03.079 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:03.079 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:08:03.079 00:08:03.079 --- 10.0.0.2 ping statistics --- 00:08:03.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.079 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:03.079 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:03.079 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms 00:08:03.079 00:08:03.079 --- 10.0.0.1 ping statistics --- 00:08:03.079 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:03.079 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=146105 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 146105 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 146105 ']' 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:03.079 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:03.079 05:08:15 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.079 [2024-12-15 05:08:15.885568] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:03.080 [2024-12-15 05:08:15.885611] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:03.080 [2024-12-15 05:08:15.964064] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:03.080 [2024-12-15 05:08:15.986771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:03.080 [2024-12-15 05:08:15.986809] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:03.080 [2024-12-15 05:08:15.986816] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:03.080 [2024-12-15 05:08:15.986824] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:03.080 [2024-12-15 05:08:15.986829] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
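The startup trace above shows `nvmfappstart` launching the SPDK NVMe-oF target inside the target network namespace and waiting for its RPC socket. A condensed sketch of the equivalent invocation, with the namespace name, binary path, and flags taken directly from this log (assumes a built SPDK tree and root privileges; the socket-wait loop is an illustrative stand-in for the harness's `waitforlisten` helper):

```shell
# Sketch only: mirrors the nvmf_tgt startup seen in this trace.
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

# Run the target inside the namespace that holds the target-side port,
# with instance id 0, full tracepoint mask, and a 3-core mask (0x7).
sudo ip netns exec cvl_0_0_ns_spdk \
    "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x7 &
nvmfpid=$!

# Wait for the UNIX-domain RPC socket before issuing any rpc.py calls
# (the harness does this via waitforlisten "$nvmfpid").
while [ ! -S /var/tmp/spdk.sock ]; do sleep 0.2; done
```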
00:08:03.080 [2024-12-15 05:08:15.988122] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:03.080 [2024-12-15 05:08:15.988228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:03.080 [2024-12-15 05:08:15.988230] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:03.080 [2024-12-15 05:08:16.280961] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:03.080 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:03.339 05:08:16 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:03.598 05:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e29a8426-3ab1-42b5-83e0-48733c88bc24 00:08:03.598 05:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e29a8426-3ab1-42b5-83e0-48733c88bc24 lvol 20 00:08:03.857 05:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=02714997-b9e8-40e6-b2b4-a37f2d9376a4 00:08:03.857 05:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:03.857 05:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 02714997-b9e8-40e6-b2b4-a37f2d9376a4 00:08:04.115 05:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:04.374 [2024-12-15 05:08:17.884995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:04.374 05:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:04.633 05:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=146445 00:08:04.633 05:08:18 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:04.633 05:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:05.569 05:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 02714997-b9e8-40e6-b2b4-a37f2d9376a4 MY_SNAPSHOT 00:08:05.829 05:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=769e119f-7c49-49c2-b017-10ad9a98f5e3 00:08:05.829 05:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 02714997-b9e8-40e6-b2b4-a37f2d9376a4 30 00:08:06.088 05:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 769e119f-7c49-49c2-b017-10ad9a98f5e3 MY_CLONE 00:08:06.347 05:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=6b72410d-7dbc-422a-982a-e6e884b3c97f 00:08:06.347 05:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 6b72410d-7dbc-422a-982a-e6e884b3c97f 00:08:06.915 05:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 146445 00:08:15.035 Initializing NVMe Controllers 00:08:15.035 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:15.035 Controller IO queue size 128, less than required. 00:08:15.035 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
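The RPC sequence traced above builds two malloc bdevs, stripes them into a RAID-0, creates a logical volume store on the stripe, exports an lvol over NVMe/TCP, and then exercises snapshot/resize/clone/inflate while `spdk_nvme_perf` drives I/O. A condensed sketch of that flow (the `rpc.py` path and every subcommand appear verbatim in this log; UUID capture via command substitution is an assumption about how the harness threads the ids through, and the target must already be running):

```shell
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Two 64 MiB malloc bdevs (512 B blocks), striped into raid0, lvstore on top.
$RPC bdev_malloc_create 64 512
$RPC bdev_malloc_create 64 512
$RPC bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1'
lvs=$($RPC bdev_lvol_create_lvstore raid0 lvs)

# A 20 MiB lvol in the store, exported over NVMe/TCP on 10.0.0.2:4420.
lvol=$($RPC bdev_lvol_create -u "$lvs" lvol 20)
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

# Snapshot the lvol, grow it to 30 MiB, clone the snapshot, inflate the clone
# (decoupling it from the snapshot) -- all while perf I/O is in flight.
snap=$($RPC bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
$RPC bdev_lvol_resize "$lvol" 30
clone=$($RPC bdev_lvol_clone "$snap" MY_CLONE)
$RPC bdev_lvol_inflate "$clone"
```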
00:08:15.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:15.035 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:15.035 Initialization complete. Launching workers. 00:08:15.035 ======================================================== 00:08:15.035 Latency(us) 00:08:15.035 Device Information : IOPS MiB/s Average min max 00:08:15.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12579.30 49.14 10177.84 1410.02 52431.84 00:08:15.035 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12424.60 48.53 10300.73 3456.57 52697.09 00:08:15.035 ======================================================== 00:08:15.035 Total : 25003.90 97.67 10238.90 1410.02 52697.09 00:08:15.035 00:08:15.035 05:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:15.294 05:08:28 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 02714997-b9e8-40e6-b2b4-a37f2d9376a4 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e29a8426-3ab1-42b5-83e0-48733c88bc24 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:15.552 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:15.553 rmmod nvme_tcp 00:08:15.812 rmmod nvme_fabrics 00:08:15.812 rmmod nvme_keyring 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 146105 ']' 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 146105 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 146105 ']' 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 146105 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 146105 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 146105' 00:08:15.812 killing process with pid 146105 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- common/autotest_common.sh@973 -- # kill 146105 00:08:15.812 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 146105 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.071 05:08:29 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:17.977 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:08:17.977 00:08:17.977 real 0m21.965s 00:08:17.977 user 1m3.216s 00:08:17.977 sys 0m7.642s 00:08:17.977 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:17.977 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:17.977 ************************************ 00:08:17.977 END TEST nvmf_lvol 00:08:17.977 
************************************ 00:08:17.977 05:08:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.238 ************************************ 00:08:18.238 START TEST nvmf_lvs_grow 00:08:18.238 ************************************ 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:18.238 * Looking for test storage... 00:08:18.238 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
scripts/common.sh@336 -- # read -ra ver1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:18.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.238 --rc genhtml_branch_coverage=1 00:08:18.238 --rc genhtml_function_coverage=1 00:08:18.238 --rc genhtml_legend=1 00:08:18.238 --rc geninfo_all_blocks=1 00:08:18.238 --rc geninfo_unexecuted_blocks=1 00:08:18.238 00:08:18.238 ' 
00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:18.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.238 --rc genhtml_branch_coverage=1 00:08:18.238 --rc genhtml_function_coverage=1 00:08:18.238 --rc genhtml_legend=1 00:08:18.238 --rc geninfo_all_blocks=1 00:08:18.238 --rc geninfo_unexecuted_blocks=1 00:08:18.238 00:08:18.238 ' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:18.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.238 --rc genhtml_branch_coverage=1 00:08:18.238 --rc genhtml_function_coverage=1 00:08:18.238 --rc genhtml_legend=1 00:08:18.238 --rc geninfo_all_blocks=1 00:08:18.238 --rc geninfo_unexecuted_blocks=1 00:08:18.238 00:08:18.238 ' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:18.238 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:18.238 --rc genhtml_branch_coverage=1 00:08:18.238 --rc genhtml_function_coverage=1 00:08:18.238 --rc genhtml_legend=1 00:08:18.238 --rc geninfo_all_blocks=1 00:08:18.238 --rc geninfo_unexecuted_blocks=1 00:08:18.238 00:08:18.238 ' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:18.238 05:08:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:18.238 
05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.238 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:18.239 05:08:31 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:08:18.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.239 
05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:08:18.239 05:08:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local 
-ga mlx 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:08:24.822 Found 0000:af:00.0 (0x8086 - 0x159b) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:08:24.822 Found 0000:af:00.1 (0x8086 - 0x159b) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:08:24.822 
05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:08:24.822 Found net devices under 0000:af:00.0: cvl_0_0 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:08:24.822 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:08:24.823 Found net devices under 0000:af:00.1: cvl_0_1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:08:24.823 05:08:37 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:08:24.823 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:24.823 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.395 ms 00:08:24.823 00:08:24.823 --- 10.0.0.2 ping statistics --- 00:08:24.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.823 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:24.823 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:24.823 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:08:24.823 00:08:24.823 --- 10.0.0.1 ping statistics --- 00:08:24.823 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:24.823 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # 
nvmfappstart -m 0x1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=151928 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 151928 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 151928 ']' 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:24.823 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.823 05:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.823 [2024-12-15 05:08:37.934457] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:24.823 [2024-12-15 05:08:37.934504] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:24.823 [2024-12-15 05:08:38.011506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:24.823 [2024-12-15 05:08:38.033395] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:24.823 [2024-12-15 05:08:38.033430] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:24.823 [2024-12-15 05:08:38.033436] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:24.823 [2024-12-15 05:08:38.033442] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:24.823 [2024-12-15 05:08:38.033447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:24.823 [2024-12-15 05:08:38.033946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:24.823 [2024-12-15 05:08:38.325644] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:24.823 ************************************ 00:08:24.823 START TEST lvs_grow_clean 00:08:24.823 ************************************ 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:24.823 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:25.084 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:25.084 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:25.342 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:25.342 05:08:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:25.342 05:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:25.342 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:25.342 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:25.601 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b lvol 150 00:08:25.601 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=d4b062fc-9269-44d4-a44b-977b14438670 00:08:25.601 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:25.601 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:25.860 [2024-12-15 05:08:39.383534] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:25.860 [2024-12-15 05:08:39.383578] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:25.860 true 00:08:25.860 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:25.860 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:26.120 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:26.120 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:26.120 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 d4b062fc-9269-44d4-a44b-977b14438670 00:08:26.378 05:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:26.637 [2024-12-15 05:08:40.137798] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:26.637 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=152344 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 152344 /var/tmp/bdevperf.sock 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 152344 ']' 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:26.897 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:26.897 [2024-12-15 05:08:40.386838] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:26.897 [2024-12-15 05:08:40.386889] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152344 ] 00:08:26.897 [2024-12-15 05:08:40.460304] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:26.897 [2024-12-15 05:08:40.482818] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:08:26.897 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:27.466 Nvme0n1 00:08:27.466 05:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:27.466 [ 00:08:27.466 { 00:08:27.466 "name": "Nvme0n1", 00:08:27.466 "aliases": [ 00:08:27.466 "d4b062fc-9269-44d4-a44b-977b14438670" 00:08:27.466 ], 00:08:27.466 "product_name": "NVMe disk", 00:08:27.466 "block_size": 4096, 00:08:27.466 "num_blocks": 38912, 00:08:27.466 "uuid": "d4b062fc-9269-44d4-a44b-977b14438670", 00:08:27.466 "numa_id": 1, 00:08:27.466 "assigned_rate_limits": { 00:08:27.466 "rw_ios_per_sec": 0, 00:08:27.466 "rw_mbytes_per_sec": 0, 00:08:27.466 "r_mbytes_per_sec": 0, 00:08:27.466 "w_mbytes_per_sec": 0 00:08:27.466 }, 00:08:27.466 "claimed": false, 00:08:27.466 "zoned": false, 00:08:27.466 "supported_io_types": { 00:08:27.466 "read": true, 
00:08:27.466 "write": true, 00:08:27.466 "unmap": true, 00:08:27.466 "flush": true, 00:08:27.466 "reset": true, 00:08:27.466 "nvme_admin": true, 00:08:27.466 "nvme_io": true, 00:08:27.466 "nvme_io_md": false, 00:08:27.466 "write_zeroes": true, 00:08:27.466 "zcopy": false, 00:08:27.466 "get_zone_info": false, 00:08:27.466 "zone_management": false, 00:08:27.466 "zone_append": false, 00:08:27.466 "compare": true, 00:08:27.466 "compare_and_write": true, 00:08:27.466 "abort": true, 00:08:27.466 "seek_hole": false, 00:08:27.466 "seek_data": false, 00:08:27.466 "copy": true, 00:08:27.466 "nvme_iov_md": false 00:08:27.466 }, 00:08:27.466 "memory_domains": [ 00:08:27.466 { 00:08:27.466 "dma_device_id": "system", 00:08:27.466 "dma_device_type": 1 00:08:27.466 } 00:08:27.466 ], 00:08:27.466 "driver_specific": { 00:08:27.466 "nvme": [ 00:08:27.466 { 00:08:27.466 "trid": { 00:08:27.466 "trtype": "TCP", 00:08:27.466 "adrfam": "IPv4", 00:08:27.466 "traddr": "10.0.0.2", 00:08:27.466 "trsvcid": "4420", 00:08:27.466 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:27.466 }, 00:08:27.466 "ctrlr_data": { 00:08:27.466 "cntlid": 1, 00:08:27.466 "vendor_id": "0x8086", 00:08:27.466 "model_number": "SPDK bdev Controller", 00:08:27.466 "serial_number": "SPDK0", 00:08:27.466 "firmware_revision": "25.01", 00:08:27.466 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:27.466 "oacs": { 00:08:27.466 "security": 0, 00:08:27.466 "format": 0, 00:08:27.466 "firmware": 0, 00:08:27.466 "ns_manage": 0 00:08:27.466 }, 00:08:27.466 "multi_ctrlr": true, 00:08:27.466 "ana_reporting": false 00:08:27.466 }, 00:08:27.466 "vs": { 00:08:27.466 "nvme_version": "1.3" 00:08:27.466 }, 00:08:27.466 "ns_data": { 00:08:27.466 "id": 1, 00:08:27.466 "can_share": true 00:08:27.466 } 00:08:27.466 } 00:08:27.466 ], 00:08:27.466 "mp_policy": "active_passive" 00:08:27.466 } 00:08:27.466 } 00:08:27.466 ] 00:08:27.466 05:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=152431 
00:08:27.466 05:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:27.466 05:08:41 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:27.725 Running I/O for 10 seconds... 00:08:28.663 Latency(us) 00:08:28.663 [2024-12-15T04:08:42.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:28.663 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:28.663 Nvme0n1 : 1.00 23383.00 91.34 0.00 0.00 0.00 0.00 0.00 00:08:28.663 [2024-12-15T04:08:42.350Z] =================================================================================================================== 00:08:28.663 [2024-12-15T04:08:42.350Z] Total : 23383.00 91.34 0.00 0.00 0.00 0.00 0.00 00:08:28.663 00:08:29.600 05:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:29.600 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:29.600 Nvme0n1 : 2.00 23547.00 91.98 0.00 0.00 0.00 0.00 0.00 00:08:29.600 [2024-12-15T04:08:43.287Z] =================================================================================================================== 00:08:29.600 [2024-12-15T04:08:43.287Z] Total : 23547.00 91.98 0.00 0.00 0.00 0.00 0.00 00:08:29.600 00:08:29.860 true 00:08:29.860 05:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:29.860 05:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:29.860 05:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:29.860 05:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:29.860 05:08:43 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 152431 00:08:30.798 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:30.798 Nvme0n1 : 3.00 23620.33 92.27 0.00 0.00 0.00 0.00 0.00 00:08:30.798 [2024-12-15T04:08:44.485Z] =================================================================================================================== 00:08:30.798 [2024-12-15T04:08:44.485Z] Total : 23620.33 92.27 0.00 0.00 0.00 0.00 0.00 00:08:30.798 00:08:31.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:31.735 Nvme0n1 : 4.00 23677.00 92.49 0.00 0.00 0.00 0.00 0.00 00:08:31.735 [2024-12-15T04:08:45.422Z] =================================================================================================================== 00:08:31.735 [2024-12-15T04:08:45.422Z] Total : 23677.00 92.49 0.00 0.00 0.00 0.00 0.00 00:08:31.735 00:08:32.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:32.672 Nvme0n1 : 5.00 23713.00 92.63 0.00 0.00 0.00 0.00 0.00 00:08:32.672 [2024-12-15T04:08:46.359Z] =================================================================================================================== 00:08:32.672 [2024-12-15T04:08:46.359Z] Total : 23713.00 92.63 0.00 0.00 0.00 0.00 0.00 00:08:32.672 00:08:33.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:33.609 Nvme0n1 : 6.00 23756.50 92.80 0.00 0.00 0.00 0.00 0.00 00:08:33.609 [2024-12-15T04:08:47.296Z] =================================================================================================================== 00:08:33.609 
[2024-12-15T04:08:47.296Z] Total : 23756.50 92.80 0.00 0.00 0.00 0.00 0.00 00:08:33.609 00:08:34.546 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:34.546 Nvme0n1 : 7.00 23783.86 92.91 0.00 0.00 0.00 0.00 0.00 00:08:34.546 [2024-12-15T04:08:48.233Z] =================================================================================================================== 00:08:34.546 [2024-12-15T04:08:48.233Z] Total : 23783.86 92.91 0.00 0.00 0.00 0.00 0.00 00:08:34.546 00:08:35.925 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:35.925 Nvme0n1 : 8.00 23799.88 92.97 0.00 0.00 0.00 0.00 0.00 00:08:35.925 [2024-12-15T04:08:49.612Z] =================================================================================================================== 00:08:35.925 [2024-12-15T04:08:49.612Z] Total : 23799.88 92.97 0.00 0.00 0.00 0.00 0.00 00:08:35.925 00:08:36.862 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.862 Nvme0n1 : 9.00 23794.56 92.95 0.00 0.00 0.00 0.00 0.00 00:08:36.862 [2024-12-15T04:08:50.549Z] =================================================================================================================== 00:08:36.862 [2024-12-15T04:08:50.549Z] Total : 23794.56 92.95 0.00 0.00 0.00 0.00 0.00 00:08:36.862 00:08:37.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.800 Nvme0n1 : 10.00 23797.20 92.96 0.00 0.00 0.00 0.00 0.00 00:08:37.800 [2024-12-15T04:08:51.487Z] =================================================================================================================== 00:08:37.800 [2024-12-15T04:08:51.487Z] Total : 23797.20 92.96 0.00 0.00 0.00 0.00 0.00 00:08:37.800 00:08:37.800 00:08:37.800 Latency(us) 00:08:37.800 [2024-12-15T04:08:51.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.800 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:37.800 Nvme0n1 : 10.00 23802.63 92.98 0.00 0.00 5374.56 3183.18 10922.67 00:08:37.800 [2024-12-15T04:08:51.487Z] =================================================================================================================== 00:08:37.800 [2024-12-15T04:08:51.487Z] Total : 23802.63 92.98 0.00 0.00 5374.56 3183.18 10922.67 00:08:37.800 { 00:08:37.800 "results": [ 00:08:37.800 { 00:08:37.800 "job": "Nvme0n1", 00:08:37.800 "core_mask": "0x2", 00:08:37.800 "workload": "randwrite", 00:08:37.800 "status": "finished", 00:08:37.800 "queue_depth": 128, 00:08:37.800 "io_size": 4096, 00:08:37.800 "runtime": 10.003096, 00:08:37.800 "iops": 23802.630705533567, 00:08:37.800 "mibps": 92.9790261934905, 00:08:37.800 "io_failed": 0, 00:08:37.800 "io_timeout": 0, 00:08:37.800 "avg_latency_us": 5374.5644544389115, 00:08:37.800 "min_latency_us": 3183.177142857143, 00:08:37.800 "max_latency_us": 10922.666666666666 00:08:37.800 } 00:08:37.800 ], 00:08:37.800 "core_count": 1 00:08:37.800 } 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 152344 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 152344 ']' 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 152344 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 152344 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:37.800 05:08:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 152344' 00:08:37.800 killing process with pid 152344 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 152344 00:08:37.800 Received shutdown signal, test time was about 10.000000 seconds 00:08:37.800 00:08:37.800 Latency(us) 00:08:37.800 [2024-12-15T04:08:51.487Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:37.800 [2024-12-15T04:08:51.487Z] =================================================================================================================== 00:08:37.800 [2024-12-15T04:08:51.487Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 152344 00:08:37.800 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:38.059 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:38.318 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:38.318 05:08:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:38.577 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:38.577 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:38.577 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:38.577 [2024-12-15 05:08:52.236654] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.837 05:08:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:38.837 request: 00:08:38.837 { 00:08:38.837 "uuid": "5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b", 00:08:38.837 "method": "bdev_lvol_get_lvstores", 00:08:38.837 "req_id": 1 00:08:38.837 } 00:08:38.837 Got JSON-RPC error response 00:08:38.837 response: 00:08:38.837 { 00:08:38.837 "code": -19, 00:08:38.837 "message": "No such device" 00:08:38.837 } 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:38.837 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:39.097 aio_bdev 00:08:39.097 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev d4b062fc-9269-44d4-a44b-977b14438670 00:08:39.097 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=d4b062fc-9269-44d4-a44b-977b14438670 00:08:39.097 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:39.097 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:08:39.097 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:39.097 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:39.097 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:39.356 05:08:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b d4b062fc-9269-44d4-a44b-977b14438670 -t 2000 00:08:39.356 [ 00:08:39.356 { 00:08:39.356 "name": "d4b062fc-9269-44d4-a44b-977b14438670", 00:08:39.356 "aliases": [ 00:08:39.356 "lvs/lvol" 00:08:39.356 ], 00:08:39.356 "product_name": "Logical Volume", 00:08:39.356 "block_size": 4096, 00:08:39.356 "num_blocks": 38912, 00:08:39.356 "uuid": "d4b062fc-9269-44d4-a44b-977b14438670", 00:08:39.356 "assigned_rate_limits": { 00:08:39.356 "rw_ios_per_sec": 0, 00:08:39.356 "rw_mbytes_per_sec": 0, 00:08:39.356 "r_mbytes_per_sec": 0, 00:08:39.356 "w_mbytes_per_sec": 0 00:08:39.356 }, 00:08:39.356 "claimed": false, 00:08:39.356 "zoned": false, 00:08:39.356 "supported_io_types": { 00:08:39.356 "read": true, 00:08:39.356 "write": true, 00:08:39.356 "unmap": true, 00:08:39.356 "flush": false, 00:08:39.356 "reset": true, 00:08:39.356 
"nvme_admin": false, 00:08:39.356 "nvme_io": false, 00:08:39.356 "nvme_io_md": false, 00:08:39.356 "write_zeroes": true, 00:08:39.356 "zcopy": false, 00:08:39.356 "get_zone_info": false, 00:08:39.356 "zone_management": false, 00:08:39.356 "zone_append": false, 00:08:39.356 "compare": false, 00:08:39.356 "compare_and_write": false, 00:08:39.356 "abort": false, 00:08:39.356 "seek_hole": true, 00:08:39.356 "seek_data": true, 00:08:39.356 "copy": false, 00:08:39.356 "nvme_iov_md": false 00:08:39.356 }, 00:08:39.356 "driver_specific": { 00:08:39.356 "lvol": { 00:08:39.356 "lvol_store_uuid": "5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b", 00:08:39.356 "base_bdev": "aio_bdev", 00:08:39.356 "thin_provision": false, 00:08:39.356 "num_allocated_clusters": 38, 00:08:39.356 "snapshot": false, 00:08:39.356 "clone": false, 00:08:39.356 "esnap_clone": false 00:08:39.356 } 00:08:39.356 } 00:08:39.356 } 00:08:39.356 ] 00:08:39.356 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:08:39.356 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:39.356 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:39.616 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:39.616 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:39.616 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:39.875 05:08:53 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:39.875 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete d4b062fc-9269-44d4-a44b-977b14438670 00:08:39.875 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 5a552f6f-1f5c-4228-b7b5-e7f3ccbd9a4b 00:08:40.135 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:40.395 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.395 00:08:40.395 real 0m15.602s 00:08:40.395 user 0m15.159s 00:08:40.395 sys 0m1.478s 00:08:40.395 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:40.395 05:08:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:40.395 ************************************ 00:08:40.395 END TEST lvs_grow_clean 00:08:40.395 ************************************ 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:40.395 ************************************ 
00:08:40.395 START TEST lvs_grow_dirty 00:08:40.395 ************************************ 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:40.395 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:40.654 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:40.654 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:40.913 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:40.913 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:40.913 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:41.173 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:41.173 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:41.173 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 lvol 150 00:08:41.433 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=7c0c7fb5-e4da-446b-a354-b911db356654 00:08:41.433 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:41.433 05:08:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:41.433 [2024-12-15 05:08:55.020845] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:08:41.433 [2024-12-15 05:08:55.020892] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:41.433 true 00:08:41.433 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:41.433 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:41.692 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:41.692 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:41.951 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 7c0c7fb5-e4da-446b-a354-b911db356654 00:08:41.951 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:42.211 [2024-12-15 05:08:55.787127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:42.211 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=154950 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 154950 /var/tmp/bdevperf.sock 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 154950 ']' 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:42.470 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:42.470 05:08:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:42.470 [2024-12-15 05:08:56.014590] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:08:42.470 [2024-12-15 05:08:56.014637] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid154950 ] 00:08:42.470 [2024-12-15 05:08:56.087372] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:42.470 [2024-12-15 05:08:56.108930] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:42.730 05:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:42.730 05:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:42.730 05:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:42.990 Nvme0n1 00:08:42.990 05:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:43.250 [ 00:08:43.250 { 00:08:43.250 "name": "Nvme0n1", 00:08:43.250 "aliases": [ 00:08:43.250 "7c0c7fb5-e4da-446b-a354-b911db356654" 00:08:43.250 ], 00:08:43.250 "product_name": "NVMe disk", 00:08:43.250 "block_size": 4096, 00:08:43.250 "num_blocks": 38912, 00:08:43.250 "uuid": "7c0c7fb5-e4da-446b-a354-b911db356654", 00:08:43.250 "numa_id": 1, 00:08:43.250 "assigned_rate_limits": { 00:08:43.250 "rw_ios_per_sec": 0, 00:08:43.250 "rw_mbytes_per_sec": 0, 00:08:43.250 "r_mbytes_per_sec": 0, 00:08:43.250 "w_mbytes_per_sec": 0 00:08:43.250 }, 00:08:43.250 "claimed": false, 00:08:43.250 "zoned": false, 00:08:43.250 "supported_io_types": { 00:08:43.250 "read": true, 
00:08:43.250 "write": true, 00:08:43.250 "unmap": true, 00:08:43.250 "flush": true, 00:08:43.250 "reset": true, 00:08:43.250 "nvme_admin": true, 00:08:43.250 "nvme_io": true, 00:08:43.250 "nvme_io_md": false, 00:08:43.250 "write_zeroes": true, 00:08:43.250 "zcopy": false, 00:08:43.250 "get_zone_info": false, 00:08:43.250 "zone_management": false, 00:08:43.250 "zone_append": false, 00:08:43.250 "compare": true, 00:08:43.250 "compare_and_write": true, 00:08:43.250 "abort": true, 00:08:43.250 "seek_hole": false, 00:08:43.250 "seek_data": false, 00:08:43.250 "copy": true, 00:08:43.250 "nvme_iov_md": false 00:08:43.250 }, 00:08:43.250 "memory_domains": [ 00:08:43.250 { 00:08:43.250 "dma_device_id": "system", 00:08:43.250 "dma_device_type": 1 00:08:43.250 } 00:08:43.250 ], 00:08:43.250 "driver_specific": { 00:08:43.250 "nvme": [ 00:08:43.250 { 00:08:43.250 "trid": { 00:08:43.250 "trtype": "TCP", 00:08:43.250 "adrfam": "IPv4", 00:08:43.250 "traddr": "10.0.0.2", 00:08:43.250 "trsvcid": "4420", 00:08:43.250 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:43.250 }, 00:08:43.250 "ctrlr_data": { 00:08:43.250 "cntlid": 1, 00:08:43.250 "vendor_id": "0x8086", 00:08:43.250 "model_number": "SPDK bdev Controller", 00:08:43.250 "serial_number": "SPDK0", 00:08:43.250 "firmware_revision": "25.01", 00:08:43.250 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:43.250 "oacs": { 00:08:43.250 "security": 0, 00:08:43.250 "format": 0, 00:08:43.250 "firmware": 0, 00:08:43.250 "ns_manage": 0 00:08:43.250 }, 00:08:43.250 "multi_ctrlr": true, 00:08:43.250 "ana_reporting": false 00:08:43.250 }, 00:08:43.250 "vs": { 00:08:43.250 "nvme_version": "1.3" 00:08:43.250 }, 00:08:43.250 "ns_data": { 00:08:43.250 "id": 1, 00:08:43.250 "can_share": true 00:08:43.250 } 00:08:43.250 } 00:08:43.250 ], 00:08:43.250 "mp_policy": "active_passive" 00:08:43.250 } 00:08:43.250 } 00:08:43.250 ] 00:08:43.250 05:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=155139 
00:08:43.250 05:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:43.250 05:08:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:43.250 Running I/O for 10 seconds... 00:08:44.190 Latency(us) 00:08:44.190 [2024-12-15T04:08:57.877Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:44.190 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.190 Nvme0n1 : 1.00 23406.00 91.43 0.00 0.00 0.00 0.00 0.00 00:08:44.190 [2024-12-15T04:08:57.877Z] =================================================================================================================== 00:08:44.190 [2024-12-15T04:08:57.877Z] Total : 23406.00 91.43 0.00 0.00 0.00 0.00 0.00 00:08:44.190 00:08:45.129 05:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:45.129 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.129 Nvme0n1 : 2.00 23649.50 92.38 0.00 0.00 0.00 0.00 0.00 00:08:45.129 [2024-12-15T04:08:58.816Z] =================================================================================================================== 00:08:45.129 [2024-12-15T04:08:58.816Z] Total : 23649.50 92.38 0.00 0.00 0.00 0.00 0.00 00:08:45.129 00:08:45.389 true 00:08:45.389 05:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:45.389 05:08:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r 
'.[0].total_data_clusters' 00:08:45.649 05:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:45.649 05:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:45.649 05:08:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 155139 00:08:46.218 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:46.218 Nvme0n1 : 3.00 23687.33 92.53 0.00 0.00 0.00 0.00 0.00 00:08:46.218 [2024-12-15T04:08:59.905Z] =================================================================================================================== 00:08:46.218 [2024-12-15T04:08:59.905Z] Total : 23687.33 92.53 0.00 0.00 0.00 0.00 0.00 00:08:46.218 00:08:47.156 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:47.156 Nvme0n1 : 4.00 23592.75 92.16 0.00 0.00 0.00 0.00 0.00 00:08:47.156 [2024-12-15T04:09:00.843Z] =================================================================================================================== 00:08:47.156 [2024-12-15T04:09:00.843Z] Total : 23592.75 92.16 0.00 0.00 0.00 0.00 0.00 00:08:47.156 00:08:48.537 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:48.537 Nvme0n1 : 5.00 23631.40 92.31 0.00 0.00 0.00 0.00 0.00 00:08:48.537 [2024-12-15T04:09:02.224Z] =================================================================================================================== 00:08:48.537 [2024-12-15T04:09:02.224Z] Total : 23631.40 92.31 0.00 0.00 0.00 0.00 0.00 00:08:48.537 00:08:49.475 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:49.475 Nvme0n1 : 6.00 23671.83 92.47 0.00 0.00 0.00 0.00 0.00 00:08:49.475 [2024-12-15T04:09:03.162Z] =================================================================================================================== 00:08:49.475 
[2024-12-15T04:09:03.162Z] Total : 23671.83 92.47 0.00 0.00 0.00 0.00 0.00 00:08:49.475 00:08:50.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:50.414 Nvme0n1 : 7.00 23711.14 92.62 0.00 0.00 0.00 0.00 0.00 00:08:50.414 [2024-12-15T04:09:04.101Z] =================================================================================================================== 00:08:50.414 [2024-12-15T04:09:04.101Z] Total : 23711.14 92.62 0.00 0.00 0.00 0.00 0.00 00:08:50.414 00:08:51.354 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:51.354 Nvme0n1 : 8.00 23747.62 92.76 0.00 0.00 0.00 0.00 0.00 00:08:51.354 [2024-12-15T04:09:05.041Z] =================================================================================================================== 00:08:51.354 [2024-12-15T04:09:05.041Z] Total : 23747.62 92.76 0.00 0.00 0.00 0.00 0.00 00:08:51.354 00:08:52.290 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.290 Nvme0n1 : 9.00 23776.89 92.88 0.00 0.00 0.00 0.00 0.00 00:08:52.290 [2024-12-15T04:09:05.977Z] =================================================================================================================== 00:08:52.290 [2024-12-15T04:09:05.977Z] Total : 23776.89 92.88 0.00 0.00 0.00 0.00 0.00 00:08:52.290 00:08:53.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.230 Nvme0n1 : 10.00 23794.70 92.95 0.00 0.00 0.00 0.00 0.00 00:08:53.230 [2024-12-15T04:09:06.917Z] =================================================================================================================== 00:08:53.230 [2024-12-15T04:09:06.917Z] Total : 23794.70 92.95 0.00 0.00 0.00 0.00 0.00 00:08:53.230 00:08:53.230 00:08:53.230 Latency(us) 00:08:53.230 [2024-12-15T04:09:06.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.230 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:53.230 Nvme0n1 : 10.00 23795.88 92.95 0.00 0.00 5375.98 3167.57 15853.47 00:08:53.230 [2024-12-15T04:09:06.917Z] =================================================================================================================== 00:08:53.230 [2024-12-15T04:09:06.917Z] Total : 23795.88 92.95 0.00 0.00 5375.98 3167.57 15853.47 00:08:53.230 { 00:08:53.230 "results": [ 00:08:53.230 { 00:08:53.230 "job": "Nvme0n1", 00:08:53.230 "core_mask": "0x2", 00:08:53.230 "workload": "randwrite", 00:08:53.230 "status": "finished", 00:08:53.230 "queue_depth": 128, 00:08:53.230 "io_size": 4096, 00:08:53.230 "runtime": 10.002195, 00:08:53.230 "iops": 23795.876805041295, 00:08:53.230 "mibps": 92.95264376969256, 00:08:53.230 "io_failed": 0, 00:08:53.230 "io_timeout": 0, 00:08:53.230 "avg_latency_us": 5375.983686004108, 00:08:53.230 "min_latency_us": 3167.5733333333333, 00:08:53.230 "max_latency_us": 15853.470476190476 00:08:53.230 } 00:08:53.230 ], 00:08:53.230 "core_count": 1 00:08:53.230 } 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 154950 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 154950 ']' 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 154950 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 154950 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:53.230 05:09:06 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 154950' 00:08:53.230 killing process with pid 154950 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 154950 00:08:53.230 Received shutdown signal, test time was about 10.000000 seconds 00:08:53.230 00:08:53.230 Latency(us) 00:08:53.230 [2024-12-15T04:09:06.917Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:53.230 [2024-12-15T04:09:06.917Z] =================================================================================================================== 00:08:53.230 [2024-12-15T04:09:06.917Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:53.230 05:09:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 154950 00:08:53.489 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:53.747 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:54.007 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:54.007 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:54.007 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # 
free_clusters=61 00:08:54.007 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:54.007 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 151928 00:08:54.007 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 151928 00:08:54.267 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 151928 Killed "${NVMF_APP[@]}" "$@" 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=157485 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 157485 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 157485 ']' 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:54.267 05:09:07 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:54.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:54.267 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.267 [2024-12-15 05:09:07.773007] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:08:54.267 [2024-12-15 05:09:07.773050] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:54.267 [2024-12-15 05:09:07.851318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:54.267 [2024-12-15 05:09:07.872600] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:54.267 [2024-12-15 05:09:07.872632] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:54.267 [2024-12-15 05:09:07.872639] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:54.267 [2024-12-15 05:09:07.872645] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:54.267 [2024-12-15 05:09:07.872654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:54.267 [2024-12-15 05:09:07.873166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:54.525 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:54.525 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:54.525 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:08:54.525 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:54.525 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:54.525 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:54.525 05:09:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:54.525 [2024-12-15 05:09:08.169098] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:54.525 [2024-12-15 05:09:08.169196] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:54.525 [2024-12-15 05:09:08.169221] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:54.525 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:54.525 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 7c0c7fb5-e4da-446b-a354-b911db356654 00:08:54.525 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7c0c7fb5-e4da-446b-a354-b911db356654 
00:08:54.525 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:54.525 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:54.526 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:54.526 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:54.526 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:54.785 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7c0c7fb5-e4da-446b-a354-b911db356654 -t 2000 00:08:55.044 [ 00:08:55.044 { 00:08:55.044 "name": "7c0c7fb5-e4da-446b-a354-b911db356654", 00:08:55.044 "aliases": [ 00:08:55.044 "lvs/lvol" 00:08:55.044 ], 00:08:55.044 "product_name": "Logical Volume", 00:08:55.044 "block_size": 4096, 00:08:55.044 "num_blocks": 38912, 00:08:55.044 "uuid": "7c0c7fb5-e4da-446b-a354-b911db356654", 00:08:55.044 "assigned_rate_limits": { 00:08:55.044 "rw_ios_per_sec": 0, 00:08:55.044 "rw_mbytes_per_sec": 0, 00:08:55.044 "r_mbytes_per_sec": 0, 00:08:55.044 "w_mbytes_per_sec": 0 00:08:55.044 }, 00:08:55.044 "claimed": false, 00:08:55.044 "zoned": false, 00:08:55.044 "supported_io_types": { 00:08:55.044 "read": true, 00:08:55.044 "write": true, 00:08:55.044 "unmap": true, 00:08:55.044 "flush": false, 00:08:55.044 "reset": true, 00:08:55.044 "nvme_admin": false, 00:08:55.044 "nvme_io": false, 00:08:55.044 "nvme_io_md": false, 00:08:55.044 "write_zeroes": true, 00:08:55.044 "zcopy": false, 00:08:55.044 "get_zone_info": false, 00:08:55.044 "zone_management": false, 00:08:55.044 "zone_append": 
false, 00:08:55.044 "compare": false, 00:08:55.044 "compare_and_write": false, 00:08:55.044 "abort": false, 00:08:55.044 "seek_hole": true, 00:08:55.044 "seek_data": true, 00:08:55.044 "copy": false, 00:08:55.044 "nvme_iov_md": false 00:08:55.044 }, 00:08:55.044 "driver_specific": { 00:08:55.044 "lvol": { 00:08:55.044 "lvol_store_uuid": "e7ce5cb1-e47d-4020-b808-cfc841b084f0", 00:08:55.044 "base_bdev": "aio_bdev", 00:08:55.044 "thin_provision": false, 00:08:55.044 "num_allocated_clusters": 38, 00:08:55.044 "snapshot": false, 00:08:55.044 "clone": false, 00:08:55.044 "esnap_clone": false 00:08:55.044 } 00:08:55.044 } 00:08:55.044 } 00:08:55.044 ] 00:08:55.044 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:55.044 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:55.044 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:55.314 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:55.314 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:55.314 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:55.314 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:55.314 05:09:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:55.573 [2024-12-15 05:09:09.134015] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:55.573 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:55.574 05:09:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:55.574 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:55.833 request: 00:08:55.833 { 00:08:55.833 "uuid": "e7ce5cb1-e47d-4020-b808-cfc841b084f0", 00:08:55.833 "method": "bdev_lvol_get_lvstores", 00:08:55.833 "req_id": 1 00:08:55.833 } 00:08:55.833 Got JSON-RPC error response 00:08:55.833 response: 00:08:55.833 { 00:08:55.833 "code": -19, 00:08:55.833 "message": "No such device" 00:08:55.833 } 00:08:55.833 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:55.833 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:55.833 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:55.833 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:55.833 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:56.092 aio_bdev 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 7c0c7fb5-e4da-446b-a354-b911db356654 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=7c0c7fb5-e4da-446b-a354-b911db356654 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:56.092 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 7c0c7fb5-e4da-446b-a354-b911db356654 -t 2000 00:08:56.352 [ 00:08:56.352 { 00:08:56.352 "name": "7c0c7fb5-e4da-446b-a354-b911db356654", 00:08:56.352 "aliases": [ 00:08:56.352 "lvs/lvol" 00:08:56.352 ], 00:08:56.352 "product_name": "Logical Volume", 00:08:56.352 "block_size": 4096, 00:08:56.352 "num_blocks": 38912, 00:08:56.352 "uuid": "7c0c7fb5-e4da-446b-a354-b911db356654", 00:08:56.352 "assigned_rate_limits": { 00:08:56.352 "rw_ios_per_sec": 0, 00:08:56.352 "rw_mbytes_per_sec": 0, 00:08:56.352 "r_mbytes_per_sec": 0, 00:08:56.352 "w_mbytes_per_sec": 0 00:08:56.352 }, 00:08:56.352 "claimed": false, 00:08:56.352 "zoned": false, 00:08:56.352 "supported_io_types": { 00:08:56.352 "read": true, 00:08:56.352 "write": true, 00:08:56.352 "unmap": true, 00:08:56.352 "flush": false, 00:08:56.352 "reset": true, 00:08:56.352 "nvme_admin": false, 00:08:56.352 "nvme_io": false, 00:08:56.352 "nvme_io_md": false, 00:08:56.352 "write_zeroes": true, 00:08:56.352 "zcopy": false, 00:08:56.352 "get_zone_info": false, 00:08:56.352 "zone_management": false, 00:08:56.352 "zone_append": false, 00:08:56.352 "compare": false, 00:08:56.352 "compare_and_write": false, 
00:08:56.352 "abort": false, 00:08:56.352 "seek_hole": true, 00:08:56.352 "seek_data": true, 00:08:56.352 "copy": false, 00:08:56.352 "nvme_iov_md": false 00:08:56.352 }, 00:08:56.352 "driver_specific": { 00:08:56.352 "lvol": { 00:08:56.352 "lvol_store_uuid": "e7ce5cb1-e47d-4020-b808-cfc841b084f0", 00:08:56.352 "base_bdev": "aio_bdev", 00:08:56.352 "thin_provision": false, 00:08:56.352 "num_allocated_clusters": 38, 00:08:56.352 "snapshot": false, 00:08:56.352 "clone": false, 00:08:56.352 "esnap_clone": false 00:08:56.352 } 00:08:56.352 } 00:08:56.352 } 00:08:56.352 ] 00:08:56.352 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:56.352 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:56.352 05:09:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:56.612 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:56.612 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:56.612 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:56.871 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:56.871 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 7c0c7fb5-e4da-446b-a354-b911db356654 00:08:56.871 05:09:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e7ce5cb1-e47d-4020-b808-cfc841b084f0 00:08:57.131 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:57.390 00:08:57.390 real 0m16.838s 00:08:57.390 user 0m43.645s 00:08:57.390 sys 0m3.676s 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:57.390 ************************************ 00:08:57.390 END TEST lvs_grow_dirty 00:08:57.390 ************************************ 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:57.390 nvmf_trace.0 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:08:57.390 05:09:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:08:57.390 rmmod nvme_tcp 00:08:57.390 rmmod nvme_fabrics 00:08:57.390 rmmod nvme_keyring 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 157485 ']' 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 157485 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 157485 ']' 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 157485 
00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:57.390 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 157485 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 157485' 00:08:57.650 killing process with pid 157485 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 157485 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 157485 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:57.650 05:09:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:00.190 00:09:00.190 real 0m41.606s 00:09:00.190 user 1m4.423s 00:09:00.190 sys 0m9.979s 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:00.190 ************************************ 00:09:00.190 END TEST nvmf_lvs_grow 00:09:00.190 ************************************ 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:00.190 ************************************ 00:09:00.190 START TEST nvmf_bdev_io_wait 00:09:00.190 ************************************ 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:00.190 * Looking for test storage... 
00:09:00.190 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # 
: 1 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:00.190 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:00.191 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.191 --rc genhtml_branch_coverage=1 00:09:00.191 --rc genhtml_function_coverage=1 00:09:00.191 --rc genhtml_legend=1 00:09:00.191 --rc geninfo_all_blocks=1 00:09:00.191 --rc geninfo_unexecuted_blocks=1 00:09:00.191 00:09:00.191 ' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:00.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.191 --rc genhtml_branch_coverage=1 00:09:00.191 --rc genhtml_function_coverage=1 00:09:00.191 --rc genhtml_legend=1 00:09:00.191 --rc geninfo_all_blocks=1 00:09:00.191 --rc geninfo_unexecuted_blocks=1 00:09:00.191 00:09:00.191 ' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:00.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.191 --rc genhtml_branch_coverage=1 00:09:00.191 --rc genhtml_function_coverage=1 00:09:00.191 --rc genhtml_legend=1 00:09:00.191 --rc geninfo_all_blocks=1 00:09:00.191 --rc geninfo_unexecuted_blocks=1 00:09:00.191 00:09:00.191 ' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:00.191 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:00.191 --rc genhtml_branch_coverage=1 00:09:00.191 --rc genhtml_function_coverage=1 00:09:00.191 --rc genhtml_legend=1 00:09:00.191 --rc geninfo_all_blocks=1 00:09:00.191 --rc geninfo_unexecuted_blocks=1 00:09:00.191 00:09:00.191 ' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:00.191 05:09:13 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:00.191 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:09:00.191 05:09:13 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 
00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:06.773 05:09:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:06.773 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:06.773 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:06.773 05:09:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.773 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:06.774 Found net devices under 0000:af:00.0: cvl_0_0 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:06.774 
05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:06.774 Found net devices under 0000:af:00.1: cvl_0_1 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:06.774 05:09:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:06.774 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:06.774 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.370 ms 00:09:06.774 00:09:06.774 --- 10.0.0.2 ping statistics --- 00:09:06.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.774 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:06.774 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:06.774 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.190 ms 00:09:06.774 00:09:06.774 --- 10.0.0.1 ping statistics --- 00:09:06.774 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:06.774 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=161480 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 161480 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 161480 ']' 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:06.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.774 [2024-12-15 05:09:19.580653] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:06.774 [2024-12-15 05:09:19.580702] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:06.774 [2024-12-15 05:09:19.658270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:06.774 [2024-12-15 05:09:19.682280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:06.774 [2024-12-15 05:09:19.682317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:06.774 [2024-12-15 05:09:19.682323] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:06.774 [2024-12-15 05:09:19.682329] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:06.774 [2024-12-15 05:09:19.682350] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:06.774 [2024-12-15 05:09:19.683798] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:06.774 [2024-12-15 05:09:19.683909] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:09:06.774 [2024-12-15 05:09:19.684036] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.774 [2024-12-15 05:09:19.684037] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.774 05:09:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:06.774 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.775 [2024-12-15 05:09:19.856046] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.775 Malloc0 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.775 
05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:06.775 [2024-12-15 05:09:19.907276] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=161689 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=161692 
00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.775 { 00:09:06.775 "params": { 00:09:06.775 "name": "Nvme$subsystem", 00:09:06.775 "trtype": "$TEST_TRANSPORT", 00:09:06.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.775 "adrfam": "ipv4", 00:09:06.775 "trsvcid": "$NVMF_PORT", 00:09:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.775 "hdgst": ${hdgst:-false}, 00:09:06.775 "ddgst": ${ddgst:-false} 00:09:06.775 }, 00:09:06.775 "method": "bdev_nvme_attach_controller" 00:09:06.775 } 00:09:06.775 EOF 00:09:06.775 )") 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=161695 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.775 { 00:09:06.775 "params": { 00:09:06.775 "name": "Nvme$subsystem", 00:09:06.775 "trtype": "$TEST_TRANSPORT", 00:09:06.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.775 "adrfam": "ipv4", 00:09:06.775 "trsvcid": "$NVMF_PORT", 00:09:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.775 "hdgst": ${hdgst:-false}, 00:09:06.775 "ddgst": ${ddgst:-false} 00:09:06.775 }, 00:09:06.775 "method": "bdev_nvme_attach_controller" 00:09:06.775 } 00:09:06.775 EOF 00:09:06.775 )") 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=161699 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.775 { 00:09:06.775 "params": { 
00:09:06.775 "name": "Nvme$subsystem", 00:09:06.775 "trtype": "$TEST_TRANSPORT", 00:09:06.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.775 "adrfam": "ipv4", 00:09:06.775 "trsvcid": "$NVMF_PORT", 00:09:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.775 "hdgst": ${hdgst:-false}, 00:09:06.775 "ddgst": ${ddgst:-false} 00:09:06.775 }, 00:09:06.775 "method": "bdev_nvme_attach_controller" 00:09:06.775 } 00:09:06.775 EOF 00:09:06.775 )") 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:06.775 { 00:09:06.775 "params": { 00:09:06.775 "name": "Nvme$subsystem", 00:09:06.775 "trtype": "$TEST_TRANSPORT", 00:09:06.775 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:06.775 "adrfam": "ipv4", 00:09:06.775 "trsvcid": "$NVMF_PORT", 00:09:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:06.775 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:06.775 "hdgst": ${hdgst:-false}, 00:09:06.775 "ddgst": ${ddgst:-false} 00:09:06.775 }, 00:09:06.775 "method": "bdev_nvme_attach_controller" 00:09:06.775 } 00:09:06.775 EOF 00:09:06.775 )") 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 161689 00:09:06.775 05:09:19 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.775 "params": { 00:09:06.775 "name": "Nvme1", 00:09:06.775 "trtype": "tcp", 00:09:06.775 "traddr": "10.0.0.2", 00:09:06.775 "adrfam": "ipv4", 00:09:06.775 "trsvcid": "4420", 00:09:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.775 "hdgst": false, 00:09:06.775 "ddgst": false 00:09:06.775 }, 00:09:06.775 "method": "bdev_nvme_attach_controller" 00:09:06.775 }' 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.775 "params": { 00:09:06.775 "name": "Nvme1", 00:09:06.775 "trtype": "tcp", 00:09:06.775 "traddr": "10.0.0.2", 00:09:06.775 "adrfam": "ipv4", 00:09:06.775 "trsvcid": "4420", 00:09:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.775 "hdgst": false, 00:09:06.775 "ddgst": false 00:09:06.775 }, 00:09:06.775 "method": "bdev_nvme_attach_controller" 00:09:06.775 }' 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.775 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.775 "params": { 00:09:06.775 "name": "Nvme1", 00:09:06.775 "trtype": "tcp", 00:09:06.775 "traddr": "10.0.0.2", 00:09:06.775 "adrfam": "ipv4", 00:09:06.775 "trsvcid": "4420", 00:09:06.775 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.775 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.776 "hdgst": false, 00:09:06.776 "ddgst": false 00:09:06.776 }, 00:09:06.776 "method": "bdev_nvme_attach_controller" 00:09:06.776 }' 00:09:06.776 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:09:06.776 05:09:19 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:06.776 "params": { 00:09:06.776 "name": "Nvme1", 00:09:06.776 "trtype": "tcp", 00:09:06.776 "traddr": "10.0.0.2", 00:09:06.776 "adrfam": "ipv4", 00:09:06.776 "trsvcid": "4420", 00:09:06.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:06.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:06.776 "hdgst": false, 00:09:06.776 "ddgst": false 00:09:06.776 }, 00:09:06.776 "method": "bdev_nvme_attach_controller" 00:09:06.776 }' 00:09:06.776 [2024-12-15 05:09:19.958095] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 22.11.4 initialization... 00:09:06.776 [2024-12-15 05:09:19.958096] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:06.776 [2024-12-15 05:09:19.958149] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:06.776 [2024-12-15 05:09:19.958149] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:06.776 [2024-12-15 05:09:19.960083] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:06.776 [2024-12-15 05:09:19.960126] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:06.776 [2024-12-15 05:09:19.963620] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:06.776 [2024-12-15 05:09:19.963658] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:06.776 [2024-12-15 05:09:20.142334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.776 [2024-12-15 05:09:20.159730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:09:06.776 [2024-12-15 05:09:20.234238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.776 [2024-12-15 05:09:20.251692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:09:06.776 [2024-12-15 05:09:20.334732] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.776 [2024-12-15 05:09:20.352068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:09:06.776 [2024-12-15 05:09:20.435361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:07.035 [2024-12-15 05:09:20.458369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:09:07.035 Running I/O for 1 seconds... 00:09:07.035 Running I/O for 1 seconds... 00:09:07.035 Running I/O for 1 seconds... 00:09:07.035 Running I/O for 1 seconds... 
00:09:07.974 11200.00 IOPS, 43.75 MiB/s 00:09:07.974 Latency(us) 00:09:07.974 [2024-12-15T04:09:21.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.974 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:07.974 Nvme1n1 : 1.01 11245.38 43.93 0.00 0.00 11337.87 6459.98 17226.61 00:09:07.974 [2024-12-15T04:09:21.661Z] =================================================================================================================== 00:09:07.974 [2024-12-15T04:09:21.661Z] Total : 11245.38 43.93 0.00 0.00 11337.87 6459.98 17226.61 00:09:07.974 9901.00 IOPS, 38.68 MiB/s [2024-12-15T04:09:21.661Z] 243272.00 IOPS, 950.28 MiB/s 00:09:07.974 Latency(us) 00:09:07.974 [2024-12-15T04:09:21.661Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.974 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:07.975 Nvme1n1 : 1.00 242906.65 948.85 0.00 0.00 524.42 221.38 1490.16 00:09:07.975 [2024-12-15T04:09:21.662Z] =================================================================================================================== 00:09:07.975 [2024-12-15T04:09:21.662Z] Total : 242906.65 948.85 0.00 0.00 524.42 221.38 1490.16 00:09:07.975 00:09:07.975 Latency(us) 00:09:07.975 [2024-12-15T04:09:21.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:07.975 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:07.975 Nvme1n1 : 1.01 9970.13 38.95 0.00 0.00 12794.25 5149.26 20222.54 00:09:07.975 [2024-12-15T04:09:21.662Z] =================================================================================================================== 00:09:07.975 [2024-12-15T04:09:21.662Z] Total : 9970.13 38.95 0.00 0.00 12794.25 5149.26 20222.54 00:09:07.975 10699.00 IOPS, 41.79 MiB/s 00:09:07.975 Latency(us) 00:09:07.975 [2024-12-15T04:09:21.662Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:09:07.975 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:07.975 Nvme1n1 : 1.01 10788.81 42.14 0.00 0.00 11831.88 4119.41 22469.49 00:09:07.975 [2024-12-15T04:09:21.662Z] =================================================================================================================== 00:09:07.975 [2024-12-15T04:09:21.662Z] Total : 10788.81 42.14 0.00 0.00 11831.88 4119.41 22469.49 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 161692 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 161695 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 161699 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:08.235 rmmod nvme_tcp 00:09:08.235 rmmod nvme_fabrics 00:09:08.235 rmmod nvme_keyring 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 161480 ']' 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 161480 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 161480 ']' 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 161480 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 161480 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 161480' 00:09:08.235 killing process with pid 161480 00:09:08.235 05:09:21 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 161480 00:09:08.235 05:09:21 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 161480 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.495 05:09:22 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:11.035 00:09:11.035 real 0m10.735s 00:09:11.035 user 0m15.917s 00:09:11.035 sys 0m6.144s 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:11.035 ************************************ 
00:09:11.035 END TEST nvmf_bdev_io_wait 00:09:11.035 ************************************ 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.035 ************************************ 00:09:11.035 START TEST nvmf_queue_depth 00:09:11.035 ************************************ 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:11.035 * Looking for test storage... 00:09:11.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # 
IFS=.-: 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.035 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.036 --rc genhtml_branch_coverage=1 00:09:11.036 --rc genhtml_function_coverage=1 00:09:11.036 --rc genhtml_legend=1 00:09:11.036 --rc geninfo_all_blocks=1 00:09:11.036 --rc 
geninfo_unexecuted_blocks=1 00:09:11.036 00:09:11.036 ' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.036 --rc genhtml_branch_coverage=1 00:09:11.036 --rc genhtml_function_coverage=1 00:09:11.036 --rc genhtml_legend=1 00:09:11.036 --rc geninfo_all_blocks=1 00:09:11.036 --rc geninfo_unexecuted_blocks=1 00:09:11.036 00:09:11.036 ' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.036 --rc genhtml_branch_coverage=1 00:09:11.036 --rc genhtml_function_coverage=1 00:09:11.036 --rc genhtml_legend=1 00:09:11.036 --rc geninfo_all_blocks=1 00:09:11.036 --rc geninfo_unexecuted_blocks=1 00:09:11.036 00:09:11.036 ' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:11.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.036 --rc genhtml_branch_coverage=1 00:09:11.036 --rc genhtml_function_coverage=1 00:09:11.036 --rc genhtml_legend=1 00:09:11.036 --rc geninfo_all_blocks=1 00:09:11.036 --rc geninfo_unexecuted_blocks=1 00:09:11.036 00:09:11.036 ' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.036 05:09:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.036 05:09:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:11.036 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:11.036 05:09:24 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:09:11.036 05:09:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.622 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:17.622 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:09:17.622 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:17.622 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:09:17.623 05:09:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:17.623 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:17.623 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:17.623 Found net devices under 0000:af:00.0: cvl_0_0 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:17.623 Found net devices under 0000:af:00.1: cvl_0_1 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:17.623 
05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:17.623 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:17.623 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:09:17.623 00:09:17.623 --- 10.0.0.2 ping statistics --- 00:09:17.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.623 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:17.623 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:17.623 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:09:17.623 00:09:17.623 --- 10.0.0.1 ping statistics --- 00:09:17.623 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:17.623 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=165452 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 165452 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165452 ']' 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:17.623 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.623 [2024-12-15 05:09:30.506489] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:17.623 [2024-12-15 05:09:30.506535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:17.623 [2024-12-15 05:09:30.587665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.623 [2024-12-15 05:09:30.608968] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:17.623 [2024-12-15 05:09:30.609005] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:17.623 [2024-12-15 05:09:30.609012] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:17.623 [2024-12-15 05:09:30.609021] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:17.623 [2024-12-15 05:09:30.609026] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:17.623 [2024-12-15 05:09:30.609492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:17.623 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 [2024-12-15 05:09:30.741088] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 Malloc0 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 [2024-12-15 05:09:30.791245] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:17.624 05:09:30 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=165578 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 165578 /var/tmp/bdevperf.sock 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 165578 ']' 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:17.624 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:17.624 05:09:30 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 [2024-12-15 05:09:30.839751] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:17.624 [2024-12-15 05:09:30.839790] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid165578 ] 00:09:17.624 [2024-12-15 05:09:30.910148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:17.624 [2024-12-15 05:09:30.941684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:17.624 05:09:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:17.624 05:09:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:09:17.624 05:09:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:17.624 05:09:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:17.624 05:09:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:17.624 NVMe0n1 00:09:17.624 05:09:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:17.624 05:09:31 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:17.624 Running I/O for 10 seconds... 
00:09:19.938 12067.00 IOPS, 47.14 MiB/s [2024-12-15T04:09:34.561Z] 12192.50 IOPS, 47.63 MiB/s [2024-12-15T04:09:35.498Z] 12285.00 IOPS, 47.99 MiB/s [2024-12-15T04:09:36.434Z] 12280.00 IOPS, 47.97 MiB/s [2024-12-15T04:09:37.371Z] 12284.00 IOPS, 47.98 MiB/s [2024-12-15T04:09:38.307Z] 12373.33 IOPS, 48.33 MiB/s [2024-12-15T04:09:39.683Z] 12422.71 IOPS, 48.53 MiB/s [2024-12-15T04:09:40.615Z] 12436.75 IOPS, 48.58 MiB/s [2024-12-15T04:09:41.552Z] 12433.44 IOPS, 48.57 MiB/s [2024-12-15T04:09:41.552Z] 12469.50 IOPS, 48.71 MiB/s 00:09:27.865 Latency(us) 00:09:27.865 [2024-12-15T04:09:41.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.865 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:27.865 Verification LBA range: start 0x0 length 0x4000 00:09:27.865 NVMe0n1 : 10.06 12490.46 48.79 0.00 0.00 81734.74 18974.23 53926.77 00:09:27.865 [2024-12-15T04:09:41.552Z] =================================================================================================================== 00:09:27.865 [2024-12-15T04:09:41.552Z] Total : 12490.46 48.79 0.00 0.00 81734.74 18974.23 53926.77 00:09:27.865 { 00:09:27.865 "results": [ 00:09:27.865 { 00:09:27.865 "job": "NVMe0n1", 00:09:27.865 "core_mask": "0x1", 00:09:27.865 "workload": "verify", 00:09:27.865 "status": "finished", 00:09:27.865 "verify_range": { 00:09:27.865 "start": 0, 00:09:27.865 "length": 16384 00:09:27.865 }, 00:09:27.865 "queue_depth": 1024, 00:09:27.865 "io_size": 4096, 00:09:27.865 "runtime": 10.062719, 00:09:27.865 "iops": 12490.461077170097, 00:09:27.865 "mibps": 48.79086358269569, 00:09:27.865 "io_failed": 0, 00:09:27.865 "io_timeout": 0, 00:09:27.865 "avg_latency_us": 81734.73815749354, 00:09:27.865 "min_latency_us": 18974.23238095238, 00:09:27.865 "max_latency_us": 53926.76571428571 00:09:27.865 } 00:09:27.865 ], 00:09:27.865 "core_count": 1 00:09:27.865 } 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- 
# killprocess 165578 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165578 ']' 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165578 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165578 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165578' 00:09:27.865 killing process with pid 165578 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165578 00:09:27.865 Received shutdown signal, test time was about 10.000000 seconds 00:09:27.865 00:09:27.865 Latency(us) 00:09:27.865 [2024-12-15T04:09:41.552Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:27.865 [2024-12-15T04:09:41.552Z] =================================================================================================================== 00:09:27.865 [2024-12-15T04:09:41.552Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:27.865 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165578 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:28.125 rmmod nvme_tcp 00:09:28.125 rmmod nvme_fabrics 00:09:28.125 rmmod nvme_keyring 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 165452 ']' 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 165452 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 165452 ']' 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 165452 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 165452 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 165452' 00:09:28.125 killing process with pid 165452 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 165452 00:09:28.125 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 165452 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:28.385 05:09:41 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.293 05:09:43 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:30.293 00:09:30.293 real 0m19.780s 00:09:30.293 user 0m23.201s 00:09:30.293 sys 0m5.949s 00:09:30.293 05:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:30.293 05:09:43 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:30.293 ************************************ 00:09:30.293 END TEST nvmf_queue_depth 00:09:30.293 ************************************ 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:30.553 ************************************ 00:09:30.553 START TEST nvmf_target_multipath 00:09:30.553 ************************************ 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:30.553 * Looking for test storage... 
00:09:30.553 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:09:30.553 05:09:44 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:30.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.553 --rc genhtml_branch_coverage=1 00:09:30.553 --rc genhtml_function_coverage=1 00:09:30.553 --rc genhtml_legend=1 00:09:30.553 --rc geninfo_all_blocks=1 00:09:30.553 --rc geninfo_unexecuted_blocks=1 00:09:30.553 00:09:30.553 ' 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:30.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.553 --rc genhtml_branch_coverage=1 00:09:30.553 --rc genhtml_function_coverage=1 00:09:30.553 --rc genhtml_legend=1 00:09:30.553 --rc geninfo_all_blocks=1 00:09:30.553 --rc geninfo_unexecuted_blocks=1 00:09:30.553 00:09:30.553 ' 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:30.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.553 --rc genhtml_branch_coverage=1 00:09:30.553 --rc genhtml_function_coverage=1 00:09:30.553 --rc genhtml_legend=1 00:09:30.553 --rc geninfo_all_blocks=1 00:09:30.553 --rc geninfo_unexecuted_blocks=1 00:09:30.553 00:09:30.553 ' 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:30.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:30.553 --rc genhtml_branch_coverage=1 00:09:30.553 --rc genhtml_function_coverage=1 00:09:30.553 --rc genhtml_legend=1 00:09:30.553 --rc geninfo_all_blocks=1 00:09:30.553 --rc geninfo_unexecuted_blocks=1 00:09:30.553 00:09:30.553 ' 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 
-- # uname -s 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:30.553 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:30.554 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:30.554 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:30.554 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:30.554 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:30.814 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:09:30.814 05:09:44 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
common/autotest_common.sh@10 -- # set +x 00:09:37.388 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:37.388 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:37.389 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:37.389 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:37.389 Found net devices under 0000:af:00.0: cvl_0_0 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:37.389 05:09:49 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:37.389 Found net devices under 0000:af:00.1: cvl_0_1 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 
00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:37.389 05:09:49 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:37.389 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:37.389 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:37.389 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:37.389 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:37.389 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip 
link set lo up 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:37.390 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:37.390 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.256 ms 00:09:37.390 00:09:37.390 --- 10.0.0.2 ping statistics --- 00:09:37.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.390 rtt min/avg/max/mdev = 0.256/0.256/0.256/0.000 ms 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:37.390 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:37.390 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:09:37.390 00:09:37.390 --- 10.0.0.1 ping statistics --- 00:09:37.390 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:37.390 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:37.390 only one NIC for nvmf test 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:37.390 05:09:50 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:37.390 rmmod nvme_tcp 00:09:37.390 rmmod nvme_fabrics 00:09:37.390 rmmod nvme_keyring 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.390 05:09:50 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' 
'' == iso ']' 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:09:38.771 00:09:38.771 real 0m8.368s 00:09:38.771 user 0m1.815s 00:09:38.771 sys 0m4.510s 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:38.771 ************************************ 00:09:38.771 END TEST nvmf_target_multipath 00:09:38.771 ************************************ 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core 
-- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.771 05:09:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.031 ************************************ 00:09:39.031 START TEST nvmf_zcopy 00:09:39.031 ************************************ 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:39.031 * Looking for test storage... 00:09:39.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 
00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@344 -- # case "$op" in 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:39.031 05:09:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:39.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.031 --rc genhtml_branch_coverage=1 00:09:39.031 --rc genhtml_function_coverage=1 00:09:39.031 --rc genhtml_legend=1 00:09:39.031 --rc geninfo_all_blocks=1 00:09:39.031 --rc geninfo_unexecuted_blocks=1 00:09:39.031 00:09:39.031 ' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:39.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.031 --rc genhtml_branch_coverage=1 00:09:39.031 --rc genhtml_function_coverage=1 00:09:39.031 --rc genhtml_legend=1 00:09:39.031 --rc geninfo_all_blocks=1 00:09:39.031 --rc geninfo_unexecuted_blocks=1 00:09:39.031 00:09:39.031 ' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:39.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.031 --rc genhtml_branch_coverage=1 00:09:39.031 --rc genhtml_function_coverage=1 00:09:39.031 --rc genhtml_legend=1 00:09:39.031 --rc geninfo_all_blocks=1 00:09:39.031 --rc geninfo_unexecuted_blocks=1 00:09:39.031 00:09:39.031 ' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:39.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:39.031 --rc genhtml_branch_coverage=1 00:09:39.031 --rc 
genhtml_function_coverage=1 00:09:39.031 --rc genhtml_legend=1 00:09:39.031 --rc geninfo_all_blocks=1 00:09:39.031 --rc geninfo_unexecuted_blocks=1 00:09:39.031 00:09:39.031 ' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.031 05:09:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.031 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:09:39.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:09:39.032 05:09:52 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:09:39.032 05:09:52 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:09:45.610 05:09:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.610 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:09:45.611 Found 0000:af:00.0 (0x8086 - 0x159b) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@367 
-- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:09:45.611 Found 0000:af:00.1 (0x8086 - 0x159b) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:09:45.611 Found net devices under 0000:af:00.0: cvl_0_0 00:09:45.611 05:09:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:09:45.611 Found net devices under 0000:af:00.1: cvl_0_1 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.611 05:09:58 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:09:45.611 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.611 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:09:45.611 00:09:45.611 --- 10.0.0.2 ping statistics --- 00:09:45.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.611 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.611 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:45.611 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:09:45.611 00:09:45.611 --- 10.0.0.1 ping statistics --- 00:09:45.611 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.611 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@509 -- # nvmfpid=174406 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 174406 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns 
exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 174406 ']' 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:45.611 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:45.611 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 [2024-12-15 05:09:58.685008] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:45.612 [2024-12-15 05:09:58.685057] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:45.612 [2024-12-15 05:09:58.762178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.612 [2024-12-15 05:09:58.783431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:45.612 [2024-12-15 05:09:58.783468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:45.612 [2024-12-15 05:09:58.783474] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:45.612 [2024-12-15 05:09:58.783480] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:45.612 [2024-12-15 05:09:58.783485] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:45.612 [2024-12-15 05:09:58.783935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 [2024-12-15 05:09:58.918725] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 [2024-12-15 05:09:58.938893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 malloc0 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:45.612 { 00:09:45.612 "params": { 00:09:45.612 "name": "Nvme$subsystem", 00:09:45.612 "trtype": "$TEST_TRANSPORT", 00:09:45.612 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:45.612 "adrfam": "ipv4", 00:09:45.612 "trsvcid": "$NVMF_PORT", 00:09:45.612 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:45.612 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:45.612 "hdgst": ${hdgst:-false}, 00:09:45.612 "ddgst": ${ddgst:-false} 00:09:45.612 }, 00:09:45.612 "method": "bdev_nvme_attach_controller" 00:09:45.612 } 00:09:45.612 EOF 00:09:45.612 )") 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
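The `gen_nvmf_target_json` heredoc captured above can be reduced to a standalone sketch. This is illustrative only, with the values hard-coded to what this run resolves them to; the real helper in `nvmf/common.sh` loops over subsystem numbers, substitutes `$NVMF_FIRST_TARGET_IP`, `$NVMF_PORT` and the digest flags, accumulates the stanzas into an array, and merges them with `jq`:

```shell
# Illustrative sketch of one gen_nvmf_target_json stanza (not the real
# helper): expand the bdev_nvme_attach_controller params block for a
# single subsystem, with this run's resolved values hard-coded.
subsystem=1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme${subsystem}",
    "trtype": "tcp",
    "traddr": "${NVMF_FIRST_TARGET_IP}",
    "adrfam": "ipv4",
    "trsvcid": "${NVMF_PORT}",
    "subnqn": "nqn.2016-06.io.spdk:cnode${subsystem}",
    "hostnqn": "nqn.2016-06.io.spdk:host${subsystem}",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
printf '%s\n' "$config"
```

The expanded output in the log that follows is exactly this stanza after substitution, pretty-printed through `jq`.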
00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:45.612 05:09:58 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:45.612 "params": { 00:09:45.612 "name": "Nvme1", 00:09:45.612 "trtype": "tcp", 00:09:45.612 "traddr": "10.0.0.2", 00:09:45.612 "adrfam": "ipv4", 00:09:45.612 "trsvcid": "4420", 00:09:45.612 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:45.612 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:45.612 "hdgst": false, 00:09:45.612 "ddgst": false 00:09:45.612 }, 00:09:45.612 "method": "bdev_nvme_attach_controller" 00:09:45.612 }' 00:09:45.612 [2024-12-15 05:09:59.021126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:09:45.612 [2024-12-15 05:09:59.021169] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid174435 ] 00:09:45.612 [2024-12-15 05:09:59.091923] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:45.612 [2024-12-15 05:09:59.114247] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:45.871 Running I/O for 10 seconds... 
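Note that `bdevperf` above reads its configuration from `/dev/fd/62`: the generated JSON is handed over through bash process substitution rather than a temporary file. A minimal sketch of that plumbing, assuming bash (here `cat` stands in for `bdevperf` and `gen_config` is a hypothetical stand-in for `gen_nvmf_target_json`):

```shell
# Sketch: deliver a generated config to a consumer via a /dev/fd/NN
# path created by bash process substitution, as in
# "bdevperf --json /dev/fd/62" in the log above.
# gen_config is a trivial stand-in for the real JSON generator, and
# cat is a stand-in for the consumer that would read the fd path.
gen_config() { printf '{"subsystems": []}\n'; }
received=$(cat <(gen_config))
printf '%s\n' "$received"
```

The advantage over a temp file is that the config never touches disk and needs no cleanup; the fd path is valid only for the lifetime of the consuming process.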
00:09:47.743 8552.00 IOPS, 66.81 MiB/s [2024-12-15T04:10:02.366Z] 8625.50 IOPS, 67.39 MiB/s [2024-12-15T04:10:03.746Z] 8696.33 IOPS, 67.94 MiB/s [2024-12-15T04:10:04.683Z] 8720.75 IOPS, 68.13 MiB/s [2024-12-15T04:10:05.620Z] 8736.80 IOPS, 68.26 MiB/s [2024-12-15T04:10:06.558Z] 8747.67 IOPS, 68.34 MiB/s [2024-12-15T04:10:07.495Z] 8761.71 IOPS, 68.45 MiB/s [2024-12-15T04:10:08.432Z] 8767.88 IOPS, 68.50 MiB/s [2024-12-15T04:10:09.810Z] 8773.33 IOPS, 68.54 MiB/s [2024-12-15T04:10:09.810Z] 8779.90 IOPS, 68.59 MiB/s 00:09:56.123 Latency(us) 00:09:56.123 [2024-12-15T04:10:09.810Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:56.123 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:56.123 Verification LBA range: start 0x0 length 0x1000 00:09:56.123 Nvme1n1 : 10.01 8783.45 68.62 0.00 0.00 14531.27 2340.57 22594.32 00:09:56.123 [2024-12-15T04:10:09.810Z] =================================================================================================================== 00:09:56.123 [2024-12-15T04:10:09.810Z] Total : 8783.45 68.62 0.00 0.00 14531.27 2340.57 22594.32 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=176217 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:09:56.123 05:10:09 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:09:56.123 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:09:56.123 { 00:09:56.123 "params": { 00:09:56.123 "name": "Nvme$subsystem", 00:09:56.123 "trtype": "$TEST_TRANSPORT", 00:09:56.123 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:56.123 "adrfam": "ipv4", 00:09:56.123 "trsvcid": "$NVMF_PORT", 00:09:56.123 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:56.123 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:56.123 "hdgst": ${hdgst:-false}, 00:09:56.123 "ddgst": ${ddgst:-false} 00:09:56.123 }, 00:09:56.123 "method": "bdev_nvme_attach_controller" 00:09:56.123 } 00:09:56.123 EOF 00:09:56.123 )") 00:09:56.124 [2024-12-15 05:10:09.544861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.544893] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:09:56.124 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 
00:09:56.124 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:09:56.124 05:10:09 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:09:56.124 "params": { 00:09:56.124 "name": "Nvme1", 00:09:56.124 "trtype": "tcp", 00:09:56.124 "traddr": "10.0.0.2", 00:09:56.124 "adrfam": "ipv4", 00:09:56.124 "trsvcid": "4420", 00:09:56.124 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:56.124 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:56.124 "hdgst": false, 00:09:56.124 "ddgst": false 00:09:56.124 }, 00:09:56.124 "method": "bdev_nvme_attach_controller" 00:09:56.124 }' 00:09:56.124 [2024-12-15 05:10:09.556860] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.556873] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.568890] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.568901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.580919] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.580929] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.583821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:09:56.124 [2024-12-15 05:10:09.583862] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid176217 ] 00:09:56.124 [2024-12-15 05:10:09.592952] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.592964] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.604984] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.604998] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.617020] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.617031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.629053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.629064] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.641079] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.641089] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.653111] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.653121] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.656205] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:56.124 [2024-12-15 05:10:09.665146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:56.124 [2024-12-15 05:10:09.665164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.677179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.677194] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.678446] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.124 [2024-12-15 05:10:09.689219] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.689234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.701250] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.701268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.713284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.713303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.725304] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.725316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.737344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.737357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.749373] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.749385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.761414] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.761434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.773440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.773455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.785472] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.785487] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.797503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.797514] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.124 [2024-12-15 05:10:09.809537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.124 [2024-12-15 05:10:09.809547] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.383 [2024-12-15 05:10:09.821571] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.383 [2024-12-15 05:10:09.821583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.383 [2024-12-15 05:10:09.833605] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.383 [2024-12-15 05:10:09.833619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.383 [2024-12-15 05:10:09.845639] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.845653] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:09.898261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.898280] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 Running I/O for 5 seconds... 00:09:56.384 [2024-12-15 05:10:09.909815] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.909828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:09.925411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.925431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:09.938708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.938729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:09.952750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.952770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:09.966601] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.966621] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:09.980159] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.980186] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:09.993901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:09.993926] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384 [2024-12-15 05:10:10.007801] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:09:56.384 [2024-12-15 05:10:10.007820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.384
[... the error pair "subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" / "nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace" repeats ~160 times between 05:10:10.022364 and 05:10:12.242792 ...]
00:09:57.422 16790.00 IOPS, 131.17 MiB/s [2024-12-15T04:10:11.109Z]
00:09:58.462 16912.00 IOPS, 132.12 MiB/s [2024-12-15T04:10:12.149Z]
[2024-12-15 05:10:12.242811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:58.722 [2024-12-15 05:10:12.256266] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.256290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.270006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.270025] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.283502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.283521] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.296953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.296973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.310740] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.310760] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.324577] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.324597] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.338608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.338627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.352476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.352496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.366335] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.366354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.380093] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.380112] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.393983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.394006] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.722 [2024-12-15 05:10:12.407658] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.722 [2024-12-15 05:10:12.407677] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.421230] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.421249] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.434851] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.434870] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.448651] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.448670] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.462530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.462549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.475930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.475949] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.489567] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.489586] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.503530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.503550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.517306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.517329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.531066] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.531085] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.544643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.544663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.558185] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.558205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.571773] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.571792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.585450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 
[2024-12-15 05:10:12.585470] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.598730] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.598750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.612324] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.612344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.626162] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.982 [2024-12-15 05:10:12.626185] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.982 [2024-12-15 05:10:12.639855] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.983 [2024-12-15 05:10:12.639875] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.983 [2024-12-15 05:10:12.653388] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.983 [2024-12-15 05:10:12.653407] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:58.983 [2024-12-15 05:10:12.667125] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.983 [2024-12-15 05:10:12.667144] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.680935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.680954] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.694882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.694902] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.708633] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.708655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.722477] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.722496] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.736408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.736427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.749726] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.749744] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.763606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.763625] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.777443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.777462] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.791351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.791370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.805360] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.805380] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:59.242 [2024-12-15 05:10:12.819320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.819339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.833499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.833518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.847152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.847171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.861271] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.861290] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.874723] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.874743] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.888496] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.888515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.902053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.902073] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.242 [2024-12-15 05:10:12.916072] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.242 [2024-12-15 05:10:12.916091] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 16939.33 IOPS, 132.34 MiB/s 
[2024-12-15T04:10:13.189Z] [2024-12-15 05:10:12.929745] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:12.929765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:12.943213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:12.943231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:12.956901] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:12.956920] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:12.970363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:12.970384] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:12.984553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:12.984572] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:12.994911] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:12.994932] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.008807] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.008828] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.022540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.022561] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.036145] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.036164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.050251] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.050271] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.064595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.064616] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.075418] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.075438] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.089632] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.089652] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.103161] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.103181] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.117465] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.117484] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.130937] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.130956] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.144637] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.144656] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.158313] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.158333] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.172662] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.172685] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.502 [2024-12-15 05:10:13.184133] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.502 [2024-12-15 05:10:13.184153] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.198173] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.198195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.211917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.211937] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.225556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.225575] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.239129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.239149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.253260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 
[2024-12-15 05:10:13.253279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.266776] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.266796] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.280372] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.280395] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.293881] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.293901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.308015] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.308051] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.318520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.318539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.332682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.332701] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.346098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.346117] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.360315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.360334] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.373885] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.373904] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.387249] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.387268] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.401246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.401265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.414755] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.414774] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.428549] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.428568] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:59.762 [2024-12-15 05:10:13.442309] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:59.762 [2024-12-15 05:10:13.442327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.456345] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.456364] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.470428] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.470448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:00.022 [2024-12-15 05:10:13.484240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.484259] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.497708] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.497727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.511448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.511467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.525147] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.525166] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.539214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.539237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.553098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.553118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.566684] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.566703] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.580414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.580433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.594128] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.594147] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.608363] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.608382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.621925] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.621944] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.636006] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.636028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.649647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.649665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.663530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.663549] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.677254] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.677273] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.690882] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.690900] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.022 [2024-12-15 05:10:13.704953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:00.022 [2024-12-15 05:10:13.704973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.281 [2024-12-15 05:10:13.719082] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.719101] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.732728] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.732746] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.746301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.746321] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.760037] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.760056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.773641] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.773662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.787547] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.787566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.801517] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.801540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.815556] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 
[2024-12-15 05:10:13.815576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.829240] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.829260] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.842682] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.842702] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.856615] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.856634] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.870414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.870433] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.884380] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.884399] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.898055] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.898074] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.911838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.911858] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 16966.25 IOPS, 132.55 MiB/s [2024-12-15T04:10:13.969Z] [2024-12-15 05:10:13.925647] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 
[2024-12-15 05:10:13.925666] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.939316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.939336] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.953117] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.953137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.282 [2024-12-15 05:10:13.967172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.282 [2024-12-15 05:10:13.967193] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:13.980975] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:13.981001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:13.994576] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:13.994596] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:14.008440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:14.008460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:14.022058] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:14.022077] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:14.035724] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:14.035744] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:14.049582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:14.049603] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:14.063300] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:14.063324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:14.077275] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.541 [2024-12-15 05:10:14.077295] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.541 [2024-12-15 05:10:14.090894] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.090913] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.104396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.104415] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.118009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.118029] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.131997] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.132031] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.145942] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.145961] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:00.542 [2024-12-15 05:10:14.159857] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.159876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.173751] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.173771] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.187262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.187282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.201414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.201434] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.542 [2024-12-15 05:10:14.214791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.542 [2024-12-15 05:10:14.214810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.228509] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.228529] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.242243] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.242263] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.255791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.255811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.269074] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.269093] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.282621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.282641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.296581] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.296601] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.310057] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.310076] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.323402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.323421] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.337411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.337432] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.351129] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.351148] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.365298] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.365318] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.376202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.376222] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.390296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.390317] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.403513] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.403533] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.417017] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.417036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.430760] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.430780] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.801 [2024-12-15 05:10:14.444018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.801 [2024-12-15 05:10:14.444037] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.802 [2024-12-15 05:10:14.458144] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.802 [2024-12-15 05:10:14.458164] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.802 [2024-12-15 05:10:14.471679] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.802 [2024-12-15 05:10:14.471698] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:00.802 [2024-12-15 05:10:14.485502] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:00.802 
[2024-12-15 05:10:14.485522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.499438] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.499458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.513613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.513632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.524792] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.524811] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.538834] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.538854] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.552460] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.552480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.566648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.566668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.580262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.580282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.594158] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.594178] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.607753] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.607773] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.621804] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.621824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.635480] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.635500] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.649419] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.649439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.663177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.663196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.676951] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.676972] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.690692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.690710] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.704211] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.704229] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:10:01.061 [2024-12-15 05:10:14.718295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.718314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.732328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.732347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.061 [2024-12-15 05:10:14.746354] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.061 [2024-12-15 05:10:14.746373] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.321 [2024-12-15 05:10:14.760284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.321 [2024-12-15 05:10:14.760304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.774284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.774303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.787434] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.787452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.801478] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.801498] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.815141] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.815161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.829172] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.829191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.842977] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.843001] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.856705] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.856724] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.870703] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.870726] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.884621] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.884640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.898099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.898118] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.912067] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.912087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 16965.80 IOPS, 132.55 MiB/s [2024-12-15T04:10:15.009Z] [2024-12-15 05:10:14.925369] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.925387] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 00:10:01.322 Latency(us) 
00:10:01.322 [2024-12-15T04:10:15.009Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:01.322 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:01.322 Nvme1n1 : 5.01 16968.92 132.57 0.00 0.00 7536.03 3292.40 17850.76 00:10:01.322 [2024-12-15T04:10:15.009Z] =================================================================================================================== 00:10:01.322 [2024-12-15T04:10:15.009Z] Total : 16968.92 132.57 0.00 0.00 7536.03 3292.40 17850.76 00:10:01.322 [2024-12-15 05:10:14.934449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.934467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.946461] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.946475] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.958508] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.958526] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.970530] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.970546] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.982561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.982576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.322 [2024-12-15 05:10:14.994587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:14.994602] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:01.322 [2024-12-15 05:10:15.006619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.322 [2024-12-15 05:10:15.006636] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.582 [2024-12-15 05:10:15.018648] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.582 [2024-12-15 05:10:15.018663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.582 [2024-12-15 05:10:15.030681] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.582 [2024-12-15 05:10:15.030704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.582 [2024-12-15 05:10:15.042711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.582 [2024-12-15 05:10:15.042722] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.582 [2024-12-15 05:10:15.054747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.582 [2024-12-15 05:10:15.054761] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.582 [2024-12-15 05:10:15.066777] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.582 [2024-12-15 05:10:15.066790] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.582 [2024-12-15 05:10:15.078808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:01.582 [2024-12-15 05:10:15.078820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:01.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (176217) - No such process 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 176217 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.582 delay0 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:01.582 05:10:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:01.841 [2024-12-15 05:10:15.270219] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:08.411 [2024-12-15 05:10:21.456350] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeebc0 is same with 
the state(6) to be set 00:10:08.411 [2024-12-15 05:10:21.456383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfeebc0 is same with the state(6) to be set 00:10:08.411 Initializing NVMe Controllers 00:10:08.411 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:08.411 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:08.411 Initialization complete. Launching workers. 00:10:08.411 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 155 00:10:08.411 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 431, failed to submit 44 00:10:08.411 success 248, unsuccessful 183, failed 0 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:08.411 rmmod nvme_tcp 00:10:08.411 rmmod nvme_fabrics 00:10:08.411 rmmod nvme_keyring 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:10:08.411 05:10:21 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 174406 ']' 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 174406 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 174406 ']' 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 174406 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 174406 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 174406' 00:10:08.411 killing process with pid 174406 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 174406 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 174406 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # grep 
-v SPDK_NVMF 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:08.411 05:10:21 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:10.319 00:10:10.319 real 0m31.332s 00:10:10.319 user 0m43.226s 00:10:10.319 sys 0m9.778s 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:10.319 ************************************ 00:10:10.319 END TEST nvmf_zcopy 00:10:10.319 ************************************ 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:10.319 ************************************ 00:10:10.319 START TEST nvmf_nmic 00:10:10.319 ************************************ 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:10.319 * Looking for test storage... 00:10:10.319 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:10:10.319 05:10:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:10:10.580 05:10:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.580 --rc 
genhtml_branch_coverage=1 00:10:10.580 --rc genhtml_function_coverage=1 00:10:10.580 --rc genhtml_legend=1 00:10:10.580 --rc geninfo_all_blocks=1 00:10:10.580 --rc geninfo_unexecuted_blocks=1 00:10:10.580 00:10:10.580 ' 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.580 --rc genhtml_branch_coverage=1 00:10:10.580 --rc genhtml_function_coverage=1 00:10:10.580 --rc genhtml_legend=1 00:10:10.580 --rc geninfo_all_blocks=1 00:10:10.580 --rc geninfo_unexecuted_blocks=1 00:10:10.580 00:10:10.580 ' 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.580 --rc genhtml_branch_coverage=1 00:10:10.580 --rc genhtml_function_coverage=1 00:10:10.580 --rc genhtml_legend=1 00:10:10.580 --rc geninfo_all_blocks=1 00:10:10.580 --rc geninfo_unexecuted_blocks=1 00:10:10.580 00:10:10.580 ' 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:10.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:10.580 --rc genhtml_branch_coverage=1 00:10:10.580 --rc genhtml_function_coverage=1 00:10:10.580 --rc genhtml_legend=1 00:10:10.580 --rc geninfo_all_blocks=1 00:10:10.580 --rc geninfo_unexecuted_blocks=1 00:10:10.580 00:10:10.580 ' 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:10.580 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:10.581 05:10:24 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:10.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:10.581 
05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:10:10.581 05:10:24 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:17.162 05:10:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:17.162 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:17.162 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:17.162 Found net devices under 0000:af:00.0: cvl_0_0 00:10:17.162 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:17.163 Found net devices under 0000:af:00.1: cvl_0_1 00:10:17.163 
05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 
00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:17.163 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:17.163 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.320 ms 00:10:17.163 00:10:17.163 --- 10.0.0.2 ping statistics --- 00:10:17.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.163 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:10:17.163 05:10:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:17.163 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:17.163 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:10:17.163 00:10:17.163 --- 10.0.0.1 ping statistics --- 00:10:17.163 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:17.163 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=181698 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:17.163 
05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 181698 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 181698 ']' 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:17.163 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.163 [2024-12-15 05:10:30.107656] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:17.163 [2024-12-15 05:10:30.107697] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:17.163 [2024-12-15 05:10:30.183614] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:17.163 [2024-12-15 05:10:30.207204] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:17.163 [2024-12-15 05:10:30.207241] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:17.163 [2024-12-15 05:10:30.207248] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:17.163 [2024-12-15 05:10:30.207254] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:10:17.163 [2024-12-15 05:10:30.207259] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:17.163 [2024-12-15 05:10:30.208560] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:17.163 [2024-12-15 05:10:30.208669] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:17.163 [2024-12-15 05:10:30.208764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:17.163 [2024-12-15 05:10:30.208764] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.163 [2024-12-15 05:10:30.349101] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:17.163 05:10:30 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.163 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.163 Malloc0 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 [2024-12-15 05:10:30.409507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # 
echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:17.164 test case1: single bdev can't be used in multiple subsystems 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 [2024-12-15 05:10:30.437405] bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:17.164 [2024-12-15 05:10:30.437424] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:17.164 [2024-12-15 05:10:30.437431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable 
to add namespace 00:10:17.164 request: 00:10:17.164 { 00:10:17.164 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:17.164 "namespace": { 00:10:17.164 "bdev_name": "Malloc0", 00:10:17.164 "no_auto_visible": false, 00:10:17.164 "hide_metadata": false 00:10:17.164 }, 00:10:17.164 "method": "nvmf_subsystem_add_ns", 00:10:17.164 "req_id": 1 00:10:17.164 } 00:10:17.164 Got JSON-RPC error response 00:10:17.164 response: 00:10:17.164 { 00:10:17.164 "code": -32602, 00:10:17.164 "message": "Invalid parameters" 00:10:17.164 } 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:17.164 Adding namespace failed - expected result. 
00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:17.164 test case2: host connect to nvmf target in multiple paths 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:17.164 [2024-12-15 05:10:30.449554] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:17.164 05:10:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:18.102 05:10:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:19.479 05:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:19.479 05:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:10:19.479 05:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:19.479 05:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:19.479 05:10:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:10:21.384 05:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:21.384 05:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:21.384 05:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:21.384 05:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:21.384 05:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:21.384 05:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:10:21.384 05:10:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:21.384 [global] 00:10:21.384 thread=1 00:10:21.384 invalidate=1 00:10:21.384 rw=write 00:10:21.384 time_based=1 00:10:21.384 runtime=1 00:10:21.384 ioengine=libaio 00:10:21.384 direct=1 00:10:21.384 bs=4096 00:10:21.384 iodepth=1 00:10:21.384 norandommap=0 00:10:21.384 numjobs=1 00:10:21.384 00:10:21.384 verify_dump=1 00:10:21.384 verify_backlog=512 00:10:21.384 verify_state_save=0 00:10:21.384 do_verify=1 00:10:21.384 verify=crc32c-intel 00:10:21.384 [job0] 00:10:21.384 filename=/dev/nvme0n1 00:10:21.384 Could not set queue depth (nvme0n1) 00:10:21.641 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:21.641 fio-3.35 00:10:21.641 Starting 1 thread 00:10:23.035 00:10:23.035 job0: (groupid=0, jobs=1): err= 0: pid=182729: Sun Dec 15 05:10:36 2024 00:10:23.035 read: IOPS=2836, BW=11.1MiB/s (11.6MB/s)(11.1MiB/1001msec) 00:10:23.035 slat (nsec): min=4061, max=37076, avg=6700.76, stdev=1654.00 00:10:23.035 clat (usec): min=155, max=412, avg=184.02, stdev=16.93 00:10:23.035 lat (usec): min=160, max=422, avg=190.72, 
stdev=16.98 00:10:23.035 clat percentiles (usec): 00:10:23.035 | 1.00th=[ 165], 5.00th=[ 169], 10.00th=[ 172], 20.00th=[ 176], 00:10:23.035 | 30.00th=[ 178], 40.00th=[ 180], 50.00th=[ 182], 60.00th=[ 182], 00:10:23.035 | 70.00th=[ 186], 80.00th=[ 188], 90.00th=[ 200], 95.00th=[ 217], 00:10:23.035 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 367], 99.95th=[ 379], 00:10:23.035 | 99.99th=[ 412] 00:10:23.035 write: IOPS=3068, BW=12.0MiB/s (12.6MB/s)(12.0MiB/1001msec); 0 zone resets 00:10:23.035 slat (nsec): min=4317, max=49471, avg=9261.44, stdev=2650.65 00:10:23.035 clat (usec): min=108, max=397, avg=136.04, stdev=21.43 00:10:23.035 lat (usec): min=115, max=446, avg=145.31, stdev=21.07 00:10:23.035 clat percentiles (usec): 00:10:23.035 | 1.00th=[ 117], 5.00th=[ 121], 10.00th=[ 122], 20.00th=[ 124], 00:10:23.035 | 30.00th=[ 126], 40.00th=[ 127], 50.00th=[ 129], 60.00th=[ 131], 00:10:23.035 | 70.00th=[ 135], 80.00th=[ 141], 90.00th=[ 172], 95.00th=[ 180], 00:10:23.035 | 99.00th=[ 200], 99.50th=[ 253], 99.90th=[ 269], 99.95th=[ 338], 00:10:23.035 | 99.99th=[ 396] 00:10:23.035 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:10:23.035 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:23.035 lat (usec) : 250=99.04%, 500=0.96% 00:10:23.035 cpu : usr=2.30%, sys=5.00%, ctx=5911, majf=0, minf=1 00:10:23.035 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:23.035 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.035 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:23.035 issued rwts: total=2839,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:23.035 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:23.035 00:10:23.035 Run status group 0 (all jobs): 00:10:23.035 READ: bw=11.1MiB/s (11.6MB/s), 11.1MiB/s-11.1MiB/s (11.6MB/s-11.6MB/s), io=11.1MiB (11.6MB), run=1001-1001msec 00:10:23.035 WRITE: bw=12.0MiB/s (12.6MB/s), 
12.0MiB/s-12.0MiB/s (12.6MB/s-12.6MB/s), io=12.0MiB (12.6MB), run=1001-1001msec 00:10:23.035 00:10:23.035 Disk stats (read/write): 00:10:23.035 nvme0n1: ios=2610/2732, merge=0/0, ticks=478/365, in_queue=843, util=91.08% 00:10:23.035 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:23.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:23.035 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:23.036 05:10:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:23.036 rmmod nvme_tcp 00:10:23.036 rmmod nvme_fabrics 00:10:23.036 rmmod nvme_keyring 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 181698 ']' 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 181698 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 181698 ']' 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 181698 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:23.036 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 181698 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 181698' 00:10:23.296 killing process with pid 181698 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 181698 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 181698 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 
00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:23.296 05:10:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.835 05:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:25.835 00:10:25.835 real 0m15.110s 00:10:25.835 user 0m33.895s 00:10:25.835 sys 0m5.588s 00:10:25.835 05:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:25.835 05:10:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.835 ************************************ 00:10:25.835 END TEST nvmf_nmic 00:10:25.835 ************************************ 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:25.835 05:10:39 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:25.835 ************************************ 00:10:25.835 START TEST nvmf_fio_target 00:10:25.835 ************************************ 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:25.835 * Looking for test storage... 00:10:25.835 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:10:25.835 
05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.835 --rc genhtml_branch_coverage=1 00:10:25.835 --rc genhtml_function_coverage=1 00:10:25.835 --rc genhtml_legend=1 00:10:25.835 --rc geninfo_all_blocks=1 00:10:25.835 --rc geninfo_unexecuted_blocks=1 00:10:25.835 00:10:25.835 ' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.835 --rc genhtml_branch_coverage=1 00:10:25.835 --rc genhtml_function_coverage=1 00:10:25.835 --rc genhtml_legend=1 00:10:25.835 --rc geninfo_all_blocks=1 00:10:25.835 --rc geninfo_unexecuted_blocks=1 00:10:25.835 00:10:25.835 ' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.835 --rc genhtml_branch_coverage=1 00:10:25.835 --rc genhtml_function_coverage=1 00:10:25.835 --rc genhtml_legend=1 00:10:25.835 --rc geninfo_all_blocks=1 00:10:25.835 --rc geninfo_unexecuted_blocks=1 00:10:25.835 00:10:25.835 ' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:25.835 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:25.835 --rc genhtml_branch_coverage=1 00:10:25.835 --rc 
genhtml_function_coverage=1 00:10:25.835 --rc genhtml_legend=1 00:10:25.835 --rc geninfo_all_blocks=1 00:10:25.835 --rc geninfo_unexecuted_blocks=1 00:10:25.835 00:10:25.835 ' 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:25.835 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:25.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:10:25.836 05:10:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:10:32.412 05:10:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@332 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:32.412 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:32.412 05:10:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.412 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:32.413 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:32.413 Found net devices under 0000:af:00.0: cvl_0_0 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:32.413 Found net devices under 0000:af:00.1: cvl_0_1 
00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:32.413 05:10:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:32.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:32.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.303 ms 00:10:32.413 00:10:32.413 --- 10.0.0.2 ping statistics --- 00:10:32.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.413 rtt min/avg/max/mdev = 0.303/0.303/0.303/0.000 ms 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:32.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:32.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.172 ms 00:10:32.413 00:10:32.413 --- 10.0.0.1 ping statistics --- 00:10:32.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:32.413 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 
00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=186462 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 186462 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 186462 ']' 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:32.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:32.413 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.413 [2024-12-15 05:10:45.395241] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:32.413 [2024-12-15 05:10:45.395292] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:32.413 [2024-12-15 05:10:45.475340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:32.413 [2024-12-15 05:10:45.498961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:32.413 [2024-12-15 05:10:45.499001] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:32.413 [2024-12-15 05:10:45.499010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:32.413 [2024-12-15 05:10:45.499016] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:32.413 [2024-12-15 05:10:45.499022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:32.413 [2024-12-15 05:10:45.500348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:32.414 [2024-12-15 05:10:45.500460] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:32.414 [2024-12-15 05:10:45.500543] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:32.414 [2024-12-15 05:10:45.500544] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:32.414 [2024-12-15 05:10:45.797437] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:32.414 05:10:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.414 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:32.414 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.673 05:10:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:32.673 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:32.932 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:32.932 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.192 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:33.192 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:33.451 05:10:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.451 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:33.451 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.710 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:33.710 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:33.969 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:33.969 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:10:34.229 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:34.229 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:34.229 05:10:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:34.488 05:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:34.488 05:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:34.747 05:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.006 [2024-12-15 05:10:48.467664] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.006 05:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:35.006 05:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:35.264 05:10:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:10:36.642 05:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:36.642 05:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:10:36.642 05:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.642 05:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:10:36.642 05:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:10:36.642 05:10:49 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:10:38.550 05:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:38.550 05:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:38.550 05:10:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.550 05:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:10:38.550 05:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.550 05:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:10:38.550 05:10:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:38.550 [global] 00:10:38.550 thread=1 00:10:38.550 invalidate=1 00:10:38.550 rw=write 00:10:38.550 time_based=1 00:10:38.550 runtime=1 00:10:38.550 ioengine=libaio 00:10:38.550 direct=1 00:10:38.550 bs=4096 00:10:38.550 iodepth=1 00:10:38.550 norandommap=0 00:10:38.550 numjobs=1 00:10:38.550 00:10:38.550 
verify_dump=1 00:10:38.550 verify_backlog=512 00:10:38.550 verify_state_save=0 00:10:38.550 do_verify=1 00:10:38.550 verify=crc32c-intel 00:10:38.550 [job0] 00:10:38.550 filename=/dev/nvme0n1 00:10:38.550 [job1] 00:10:38.550 filename=/dev/nvme0n2 00:10:38.550 [job2] 00:10:38.550 filename=/dev/nvme0n3 00:10:38.550 [job3] 00:10:38.550 filename=/dev/nvme0n4 00:10:38.550 Could not set queue depth (nvme0n1) 00:10:38.550 Could not set queue depth (nvme0n2) 00:10:38.550 Could not set queue depth (nvme0n3) 00:10:38.550 Could not set queue depth (nvme0n4) 00:10:38.809 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.809 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.809 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.809 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:38.809 fio-3.35 00:10:38.809 Starting 4 threads 00:10:40.220 00:10:40.220 job0: (groupid=0, jobs=1): err= 0: pid=187784: Sun Dec 15 05:10:53 2024 00:10:40.220 read: IOPS=2453, BW=9814KiB/s (10.0MB/s)(9824KiB/1001msec) 00:10:40.220 slat (nsec): min=7047, max=28858, avg=8035.49, stdev=1361.53 00:10:40.220 clat (usec): min=168, max=523, avg=214.85, stdev=21.17 00:10:40.220 lat (usec): min=175, max=531, avg=222.89, stdev=21.32 00:10:40.220 clat percentiles (usec): 00:10:40.220 | 1.00th=[ 180], 5.00th=[ 188], 10.00th=[ 194], 20.00th=[ 198], 00:10:40.220 | 30.00th=[ 204], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:10:40.220 | 70.00th=[ 223], 80.00th=[ 229], 90.00th=[ 245], 95.00th=[ 253], 00:10:40.220 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 334], 99.95th=[ 367], 00:10:40.220 | 99.99th=[ 523] 00:10:40.220 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:40.220 slat (nsec): min=10325, max=51840, avg=11793.06, stdev=1832.98 
00:10:40.220 clat (usec): min=118, max=2733, avg=159.01, stdev=53.61 00:10:40.220 lat (usec): min=129, max=2745, avg=170.80, stdev=53.71 00:10:40.220 clat percentiles (usec): 00:10:40.220 | 1.00th=[ 129], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 145], 00:10:40.220 | 30.00th=[ 149], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:10:40.220 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 188], 00:10:40.220 | 99.00th=[ 206], 99.50th=[ 217], 99.90th=[ 297], 99.95th=[ 408], 00:10:40.220 | 99.99th=[ 2737] 00:10:40.220 bw ( KiB/s): min=12288, max=12288, per=51.70%, avg=12288.00, stdev= 0.00, samples=1 00:10:40.220 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:10:40.220 lat (usec) : 250=96.53%, 500=3.43%, 750=0.02% 00:10:40.220 lat (msec) : 4=0.02% 00:10:40.220 cpu : usr=4.70%, sys=7.10%, ctx=5018, majf=0, minf=1 00:10:40.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.220 issued rwts: total=2456,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.220 job1: (groupid=0, jobs=1): err= 0: pid=187785: Sun Dec 15 05:10:53 2024 00:10:40.220 read: IOPS=2242, BW=8971KiB/s (9186kB/s)(8980KiB/1001msec) 00:10:40.220 slat (nsec): min=7895, max=42203, avg=9222.20, stdev=1691.68 00:10:40.220 clat (usec): min=177, max=701, avg=233.84, stdev=23.09 00:10:40.220 lat (usec): min=186, max=709, avg=243.06, stdev=23.18 00:10:40.220 clat percentiles (usec): 00:10:40.220 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 219], 00:10:40.220 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:10:40.220 | 70.00th=[ 243], 80.00th=[ 249], 90.00th=[ 255], 95.00th=[ 262], 00:10:40.220 | 99.00th=[ 277], 99.50th=[ 302], 99.90th=[ 515], 99.95th=[ 611], 00:10:40.220 | 
99.99th=[ 701] 00:10:40.220 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:40.220 slat (nsec): min=10906, max=41660, avg=13006.23, stdev=1897.68 00:10:40.220 clat (usec): min=123, max=369, avg=158.50, stdev=15.06 00:10:40.220 lat (usec): min=136, max=385, avg=171.50, stdev=15.61 00:10:40.220 clat percentiles (usec): 00:10:40.220 | 1.00th=[ 135], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 147], 00:10:40.220 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:10:40.220 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 178], 95.00th=[ 184], 00:10:40.220 | 99.00th=[ 204], 99.50th=[ 212], 99.90th=[ 269], 99.95th=[ 285], 00:10:40.220 | 99.99th=[ 371] 00:10:40.220 bw ( KiB/s): min=11624, max=11624, per=48.91%, avg=11624.00, stdev= 0.00, samples=1 00:10:40.220 iops : min= 2906, max= 2906, avg=2906.00, stdev= 0.00, samples=1 00:10:40.220 lat (usec) : 250=92.26%, 500=7.68%, 750=0.06% 00:10:40.220 cpu : usr=5.00%, sys=7.70%, ctx=4807, majf=0, minf=1 00:10:40.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.220 issued rwts: total=2245,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.220 job2: (groupid=0, jobs=1): err= 0: pid=187786: Sun Dec 15 05:10:53 2024 00:10:40.220 read: IOPS=22, BW=89.0KiB/s (91.1kB/s)(92.0KiB/1034msec) 00:10:40.220 slat (nsec): min=10915, max=23878, avg=22649.43, stdev=2578.06 00:10:40.220 clat (usec): min=40870, max=41963, avg=41029.24, stdev=224.68 00:10:40.220 lat (usec): min=40893, max=41987, avg=41051.89, stdev=223.91 00:10:40.220 clat percentiles (usec): 00:10:40.220 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:40.220 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:40.220 | 
70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:40.220 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:40.220 | 99.99th=[42206] 00:10:40.220 write: IOPS=495, BW=1981KiB/s (2028kB/s)(2048KiB/1034msec); 0 zone resets 00:10:40.220 slat (nsec): min=9890, max=45510, avg=12422.50, stdev=2108.35 00:10:40.220 clat (usec): min=127, max=260, avg=160.20, stdev=12.86 00:10:40.220 lat (usec): min=139, max=305, avg=172.62, stdev=13.62 00:10:40.220 clat percentiles (usec): 00:10:40.220 | 1.00th=[ 139], 5.00th=[ 143], 10.00th=[ 145], 20.00th=[ 151], 00:10:40.220 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 159], 60.00th=[ 163], 00:10:40.220 | 70.00th=[ 165], 80.00th=[ 172], 90.00th=[ 176], 95.00th=[ 184], 00:10:40.220 | 99.00th=[ 194], 99.50th=[ 196], 99.90th=[ 262], 99.95th=[ 262], 00:10:40.220 | 99.99th=[ 262] 00:10:40.220 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:10:40.220 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:40.220 lat (usec) : 250=95.51%, 500=0.19% 00:10:40.220 lat (msec) : 50=4.30% 00:10:40.220 cpu : usr=0.19%, sys=0.77%, ctx=535, majf=0, minf=2 00:10:40.220 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.220 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.220 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.220 job3: (groupid=0, jobs=1): err= 0: pid=187787: Sun Dec 15 05:10:53 2024 00:10:40.220 read: IOPS=21, BW=87.6KiB/s (89.8kB/s)(88.0KiB/1004msec) 00:10:40.220 slat (nsec): min=9993, max=22879, avg=21662.59, stdev=2614.57 00:10:40.220 clat (usec): min=40871, max=41051, avg=40967.57, stdev=46.03 00:10:40.220 lat (usec): min=40893, max=41073, avg=40989.24, stdev=46.82 00:10:40.220 clat percentiles 
(usec): 00:10:40.220 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:10:40.220 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:10:40.220 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:10:40.220 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:40.220 | 99.99th=[41157] 00:10:40.220 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:10:40.220 slat (nsec): min=10806, max=39251, avg=13224.07, stdev=2523.44 00:10:40.220 clat (usec): min=127, max=279, avg=183.19, stdev=16.47 00:10:40.220 lat (usec): min=138, max=318, avg=196.42, stdev=17.29 00:10:40.220 clat percentiles (usec): 00:10:40.220 | 1.00th=[ 145], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 174], 00:10:40.220 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 184], 60.00th=[ 188], 00:10:40.220 | 70.00th=[ 192], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:10:40.220 | 99.00th=[ 219], 99.50th=[ 221], 99.90th=[ 281], 99.95th=[ 281], 00:10:40.220 | 99.99th=[ 281] 00:10:40.220 bw ( KiB/s): min= 4096, max= 4096, per=17.23%, avg=4096.00, stdev= 0.00, samples=1 00:10:40.220 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:40.220 lat (usec) : 250=95.69%, 500=0.19% 00:10:40.220 lat (msec) : 50=4.12% 00:10:40.220 cpu : usr=0.20%, sys=1.20%, ctx=535, majf=0, minf=1 00:10:40.221 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:40.221 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.221 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:40.221 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:40.221 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:40.221 00:10:40.221 Run status group 0 (all jobs): 00:10:40.221 READ: bw=17.9MiB/s (18.8MB/s), 87.6KiB/s-9814KiB/s (89.8kB/s-10.0MB/s), io=18.5MiB (19.4MB), run=1001-1034msec 00:10:40.221 WRITE: bw=23.2MiB/s (24.3MB/s), 
1981KiB/s-9.99MiB/s (2028kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1034msec 00:10:40.221 00:10:40.221 Disk stats (read/write): 00:10:40.221 nvme0n1: ios=2072/2194, merge=0/0, ticks=1273/339, in_queue=1612, util=85.57% 00:10:40.221 nvme0n2: ios=2051/2048, merge=0/0, ticks=1072/292, in_queue=1364, util=89.61% 00:10:40.221 nvme0n3: ios=75/512, merge=0/0, ticks=802/79, in_queue=881, util=94.57% 00:10:40.221 nvme0n4: ios=41/512, merge=0/0, ticks=1641/90, in_queue=1731, util=94.32% 00:10:40.221 05:10:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:40.221 [global] 00:10:40.221 thread=1 00:10:40.221 invalidate=1 00:10:40.221 rw=randwrite 00:10:40.221 time_based=1 00:10:40.221 runtime=1 00:10:40.221 ioengine=libaio 00:10:40.221 direct=1 00:10:40.221 bs=4096 00:10:40.221 iodepth=1 00:10:40.221 norandommap=0 00:10:40.221 numjobs=1 00:10:40.221 00:10:40.221 verify_dump=1 00:10:40.221 verify_backlog=512 00:10:40.221 verify_state_save=0 00:10:40.221 do_verify=1 00:10:40.221 verify=crc32c-intel 00:10:40.221 [job0] 00:10:40.221 filename=/dev/nvme0n1 00:10:40.221 [job1] 00:10:40.221 filename=/dev/nvme0n2 00:10:40.221 [job2] 00:10:40.221 filename=/dev/nvme0n3 00:10:40.221 [job3] 00:10:40.221 filename=/dev/nvme0n4 00:10:40.221 Could not set queue depth (nvme0n1) 00:10:40.221 Could not set queue depth (nvme0n2) 00:10:40.221 Could not set queue depth (nvme0n3) 00:10:40.221 Could not set queue depth (nvme0n4) 00:10:40.479 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.479 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.479 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.479 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 
4096B-4096B, ioengine=libaio, iodepth=1 00:10:40.479 fio-3.35 00:10:40.479 Starting 4 threads 00:10:41.853 00:10:41.853 job0: (groupid=0, jobs=1): err= 0: pid=188150: Sun Dec 15 05:10:55 2024 00:10:41.853 read: IOPS=2087, BW=8352KiB/s (8552kB/s)(8360KiB/1001msec) 00:10:41.853 slat (nsec): min=6304, max=27903, avg=7228.36, stdev=1228.44 00:10:41.853 clat (usec): min=181, max=572, avg=261.76, stdev=63.49 00:10:41.853 lat (usec): min=188, max=580, avg=268.98, stdev=63.71 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 223], 00:10:41.853 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:10:41.853 | 70.00th=[ 262], 80.00th=[ 277], 90.00th=[ 334], 95.00th=[ 429], 00:10:41.853 | 99.00th=[ 494], 99.50th=[ 506], 99.90th=[ 545], 99.95th=[ 553], 00:10:41.853 | 99.99th=[ 570] 00:10:41.853 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:10:41.853 slat (nsec): min=8761, max=51144, avg=9961.42, stdev=1562.40 00:10:41.853 clat (usec): min=113, max=931, avg=157.06, stdev=30.01 00:10:41.853 lat (usec): min=123, max=941, avg=167.02, stdev=30.31 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 124], 5.00th=[ 131], 10.00th=[ 135], 20.00th=[ 139], 00:10:41.853 | 30.00th=[ 145], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 159], 00:10:41.853 | 70.00th=[ 163], 80.00th=[ 172], 90.00th=[ 184], 95.00th=[ 192], 00:10:41.853 | 99.00th=[ 233], 99.50th=[ 258], 99.90th=[ 627], 99.95th=[ 635], 00:10:41.853 | 99.99th=[ 930] 00:10:41.853 bw ( KiB/s): min= 8248, max= 8248, per=26.48%, avg=8248.00, stdev= 0.00, samples=1 00:10:41.853 iops : min= 2062, max= 2062, avg=2062.00, stdev= 0.00, samples=1 00:10:41.853 lat (usec) : 250=81.78%, 500=17.76%, 750=0.43%, 1000=0.02% 00:10:41.853 cpu : usr=2.70%, sys=3.90%, ctx=4650, majf=0, minf=1 00:10:41.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.0% 00:10:41.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.853 issued rwts: total=2090,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.853 job1: (groupid=0, jobs=1): err= 0: pid=188153: Sun Dec 15 05:10:55 2024 00:10:41.853 read: IOPS=1227, BW=4910KiB/s (5028kB/s)(4920KiB/1002msec) 00:10:41.853 slat (nsec): min=7502, max=20928, avg=8521.47, stdev=1139.65 00:10:41.853 clat (usec): min=189, max=41122, avg=561.95, stdev=3472.56 00:10:41.853 lat (usec): min=198, max=41133, avg=570.47, stdev=3472.74 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 202], 5.00th=[ 215], 10.00th=[ 225], 20.00th=[ 235], 00:10:41.853 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 260], 00:10:41.853 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 310], 95.00th=[ 400], 00:10:41.853 | 99.00th=[ 490], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:10:41.853 | 99.99th=[41157] 00:10:41.853 write: IOPS=1532, BW=6132KiB/s (6279kB/s)(6144KiB/1002msec); 0 zone resets 00:10:41.853 slat (nsec): min=7278, max=72839, avg=12012.35, stdev=2491.31 00:10:41.853 clat (usec): min=117, max=370, avg=178.19, stdev=28.25 00:10:41.853 lat (usec): min=133, max=402, avg=190.20, stdev=28.55 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 135], 5.00th=[ 143], 10.00th=[ 149], 20.00th=[ 155], 00:10:41.853 | 30.00th=[ 161], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 182], 00:10:41.853 | 70.00th=[ 188], 80.00th=[ 196], 90.00th=[ 208], 95.00th=[ 231], 00:10:41.853 | 99.00th=[ 277], 99.50th=[ 289], 99.90th=[ 351], 99.95th=[ 371], 00:10:41.853 | 99.99th=[ 371] 00:10:41.853 bw ( KiB/s): min= 3728, max= 8560, per=19.73%, avg=6144.00, stdev=3416.74, samples=2 00:10:41.853 iops : min= 932, max= 2140, avg=1536.00, stdev=854.18, samples=2 00:10:41.853 lat (usec) : 250=72.56%, 500=27.08%, 750=0.04% 00:10:41.853 lat (msec) : 50=0.33% 00:10:41.853 cpu : usr=1.70%, 
sys=2.90%, ctx=2767, majf=0, minf=1 00:10:41.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.853 issued rwts: total=1230,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.853 job2: (groupid=0, jobs=1): err= 0: pid=188154: Sun Dec 15 05:10:55 2024 00:10:41.853 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:10:41.853 slat (nsec): min=8455, max=27613, avg=9700.60, stdev=1417.97 00:10:41.853 clat (usec): min=192, max=474, avg=256.41, stdev=44.63 00:10:41.853 lat (usec): min=202, max=484, avg=266.12, stdev=44.72 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 204], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 225], 00:10:41.853 | 30.00th=[ 229], 40.00th=[ 235], 50.00th=[ 241], 60.00th=[ 249], 00:10:41.853 | 70.00th=[ 265], 80.00th=[ 289], 90.00th=[ 330], 95.00th=[ 355], 00:10:41.853 | 99.00th=[ 383], 99.50th=[ 408], 99.90th=[ 453], 99.95th=[ 469], 00:10:41.853 | 99.99th=[ 474] 00:10:41.853 write: IOPS=2362, BW=9451KiB/s (9677kB/s)(9460KiB/1001msec); 0 zone resets 00:10:41.853 slat (nsec): min=11672, max=95863, avg=13059.68, stdev=2326.40 00:10:41.853 clat (usec): min=132, max=591, avg=173.37, stdev=21.64 00:10:41.853 lat (usec): min=144, max=603, avg=186.43, stdev=21.95 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 143], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:10:41.853 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 176], 00:10:41.853 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 206], 00:10:41.853 | 99.00th=[ 243], 99.50th=[ 269], 99.90th=[ 375], 99.95th=[ 400], 00:10:41.853 | 99.99th=[ 594] 00:10:41.853 bw ( KiB/s): min= 8192, max= 8192, per=26.30%, avg=8192.00, stdev= 0.00, samples=1 00:10:41.853 iops : min= 2048, 
max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:10:41.853 lat (usec) : 250=81.62%, 500=18.35%, 750=0.02% 00:10:41.853 cpu : usr=3.10%, sys=8.80%, ctx=4414, majf=0, minf=1 00:10:41.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.853 issued rwts: total=2048,2365,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.853 job3: (groupid=0, jobs=1): err= 0: pid=188155: Sun Dec 15 05:10:55 2024 00:10:41.853 read: IOPS=1018, BW=4074KiB/s (4172kB/s)(4184KiB/1027msec) 00:10:41.853 slat (nsec): min=6989, max=32563, avg=9904.18, stdev=4066.15 00:10:41.853 clat (usec): min=181, max=40983, avg=703.20, stdev=4318.28 00:10:41.853 lat (usec): min=189, max=40992, avg=713.11, stdev=4318.19 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 186], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 204], 00:10:41.853 | 30.00th=[ 210], 40.00th=[ 219], 50.00th=[ 229], 60.00th=[ 255], 00:10:41.853 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:10:41.853 | 99.00th=[40633], 99.50th=[40633], 99.90th=[41157], 99.95th=[41157], 00:10:41.853 | 99.99th=[41157] 00:10:41.853 write: IOPS=1495, BW=5982KiB/s (6126kB/s)(6144KiB/1027msec); 0 zone resets 00:10:41.853 slat (nsec): min=9582, max=71066, avg=12368.68, stdev=4093.57 00:10:41.853 clat (usec): min=113, max=347, avg=165.75, stdev=24.23 00:10:41.853 lat (usec): min=133, max=418, avg=178.12, stdev=25.00 00:10:41.853 clat percentiles (usec): 00:10:41.853 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 145], 20.00th=[ 149], 00:10:41.853 | 30.00th=[ 153], 40.00th=[ 157], 50.00th=[ 161], 60.00th=[ 165], 00:10:41.853 | 70.00th=[ 172], 80.00th=[ 178], 90.00th=[ 188], 95.00th=[ 212], 00:10:41.853 | 99.00th=[ 260], 99.50th=[ 273], 99.90th=[ 338], 99.95th=[ 347], 
00:10:41.853 | 99.99th=[ 347] 00:10:41.853 bw ( KiB/s): min= 1544, max=10744, per=19.73%, avg=6144.00, stdev=6505.38, samples=2 00:10:41.853 iops : min= 386, max= 2686, avg=1536.00, stdev=1626.35, samples=2 00:10:41.853 lat (usec) : 250=81.80%, 500=17.70%, 750=0.04% 00:10:41.853 lat (msec) : 50=0.46% 00:10:41.853 cpu : usr=0.78%, sys=3.22%, ctx=2583, majf=0, minf=1 00:10:41.853 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:41.853 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.853 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:41.853 issued rwts: total=1046,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:41.853 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:41.853 00:10:41.854 Run status group 0 (all jobs): 00:10:41.854 READ: bw=24.4MiB/s (25.6MB/s), 4074KiB/s-8352KiB/s (4172kB/s-8552kB/s), io=25.1MiB (26.3MB), run=1001-1027msec 00:10:41.854 WRITE: bw=30.4MiB/s (31.9MB/s), 5982KiB/s-9.99MiB/s (6126kB/s-10.5MB/s), io=31.2MiB (32.8MB), run=1001-1027msec 00:10:41.854 00:10:41.854 Disk stats (read/write): 00:10:41.854 nvme0n1: ios=1654/2048, merge=0/0, ticks=447/328, in_queue=775, util=81.96% 00:10:41.854 nvme0n2: ios=1258/1536, merge=0/0, ticks=787/265, in_queue=1052, util=96.81% 00:10:41.854 nvme0n3: ios=1572/1977, merge=0/0, ticks=527/327, in_queue=854, util=96.20% 00:10:41.854 nvme0n4: ios=1092/1536, merge=0/0, ticks=895/242, in_queue=1137, util=97.35% 00:10:41.854 05:10:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:41.854 [global] 00:10:41.854 thread=1 00:10:41.854 invalidate=1 00:10:41.854 rw=write 00:10:41.854 time_based=1 00:10:41.854 runtime=1 00:10:41.854 ioengine=libaio 00:10:41.854 direct=1 00:10:41.854 bs=4096 00:10:41.854 iodepth=128 00:10:41.854 norandommap=0 00:10:41.854 numjobs=1 00:10:41.854 
00:10:41.854 verify_dump=1 00:10:41.854 verify_backlog=512 00:10:41.854 verify_state_save=0 00:10:41.854 do_verify=1 00:10:41.854 verify=crc32c-intel 00:10:41.854 [job0] 00:10:41.854 filename=/dev/nvme0n1 00:10:41.854 [job1] 00:10:41.854 filename=/dev/nvme0n2 00:10:41.854 [job2] 00:10:41.854 filename=/dev/nvme0n3 00:10:41.854 [job3] 00:10:41.854 filename=/dev/nvme0n4 00:10:41.854 Could not set queue depth (nvme0n1) 00:10:41.854 Could not set queue depth (nvme0n2) 00:10:41.854 Could not set queue depth (nvme0n3) 00:10:41.854 Could not set queue depth (nvme0n4) 00:10:42.112 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.112 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.112 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.112 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:42.112 fio-3.35 00:10:42.112 Starting 4 threads 00:10:43.487 00:10:43.487 job0: (groupid=0, jobs=1): err= 0: pid=188521: Sun Dec 15 05:10:56 2024 00:10:43.487 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:10:43.487 slat (nsec): min=1243, max=14996k, avg=104877.07, stdev=739838.50 00:10:43.487 clat (usec): min=3559, max=61151, avg=12762.66, stdev=6974.83 00:10:43.487 lat (usec): min=3568, max=61158, avg=12867.54, stdev=7027.70 00:10:43.487 clat percentiles (usec): 00:10:43.487 | 1.00th=[ 4686], 5.00th=[ 6718], 10.00th=[ 8848], 20.00th=[ 9503], 00:10:43.487 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10945], 60.00th=[11863], 00:10:43.487 | 70.00th=[13173], 80.00th=[15008], 90.00th=[17171], 95.00th=[20841], 00:10:43.487 | 99.00th=[54264], 99.50th=[57410], 99.90th=[61080], 99.95th=[61080], 00:10:43.487 | 99.99th=[61080] 00:10:43.487 write: IOPS=4688, BW=18.3MiB/s (19.2MB/s)(18.4MiB/1005msec); 0 zone resets 00:10:43.487 slat 
(usec): min=2, max=22144, avg=101.92, stdev=594.25 00:10:43.487 clat (usec): min=866, max=71364, avg=14586.56, stdev=11622.93 00:10:43.487 lat (usec): min=881, max=71370, avg=14688.48, stdev=11675.47 00:10:43.487 clat percentiles (usec): 00:10:43.487 | 1.00th=[ 3064], 5.00th=[ 5342], 10.00th=[ 6587], 20.00th=[ 8717], 00:10:43.487 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[11207], 00:10:43.487 | 70.00th=[13435], 80.00th=[17433], 90.00th=[28181], 95.00th=[42730], 00:10:43.487 | 99.00th=[63701], 99.50th=[69731], 99.90th=[71828], 99.95th=[71828], 00:10:43.487 | 99.99th=[71828] 00:10:43.487 bw ( KiB/s): min=16256, max=20656, per=25.78%, avg=18456.00, stdev=3111.27, samples=2 00:10:43.487 iops : min= 4064, max= 5164, avg=4614.00, stdev=777.82, samples=2 00:10:43.487 lat (usec) : 1000=0.03% 00:10:43.487 lat (msec) : 2=0.03%, 4=1.77%, 10=34.06%, 20=54.66%, 50=7.50% 00:10:43.487 lat (msec) : 100=1.95% 00:10:43.487 cpu : usr=3.69%, sys=5.58%, ctx=572, majf=0, minf=1 00:10:43.487 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:43.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.488 issued rwts: total=4608,4712,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.488 job1: (groupid=0, jobs=1): err= 0: pid=188522: Sun Dec 15 05:10:56 2024 00:10:43.488 read: IOPS=4566, BW=17.8MiB/s (18.7MB/s)(18.0MiB/1009msec) 00:10:43.488 slat (nsec): min=1073, max=19948k, avg=100810.95, stdev=838880.25 00:10:43.488 clat (usec): min=3382, max=52399, avg=13472.45, stdev=6162.82 00:10:43.488 lat (usec): min=6024, max=52402, avg=13573.26, stdev=6211.59 00:10:43.488 clat percentiles (usec): 00:10:43.488 | 1.00th=[ 6456], 5.00th=[ 7570], 10.00th=[ 9110], 20.00th=[ 9765], 00:10:43.488 | 30.00th=[10159], 40.00th=[10552], 50.00th=[11338], 60.00th=[12256], 00:10:43.488 | 
70.00th=[14222], 80.00th=[15139], 90.00th=[22152], 95.00th=[27919], 00:10:43.488 | 99.00th=[33162], 99.50th=[43779], 99.90th=[47973], 99.95th=[47973], 00:10:43.488 | 99.99th=[52167] 00:10:43.488 write: IOPS=4958, BW=19.4MiB/s (20.3MB/s)(19.5MiB/1009msec); 0 zone resets 00:10:43.488 slat (nsec): min=1931, max=19889k, avg=100607.19, stdev=734125.58 00:10:43.488 clat (usec): min=2942, max=65029, avg=13038.62, stdev=9246.07 00:10:43.488 lat (usec): min=2953, max=65036, avg=13139.22, stdev=9311.38 00:10:43.488 clat percentiles (usec): 00:10:43.488 | 1.00th=[ 5276], 5.00th=[ 7439], 10.00th=[ 8160], 20.00th=[ 9110], 00:10:43.488 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10421], 60.00th=[10814], 00:10:43.488 | 70.00th=[11994], 80.00th=[12518], 90.00th=[20055], 95.00th=[27395], 00:10:43.488 | 99.00th=[60556], 99.50th=[63177], 99.90th=[65274], 99.95th=[65274], 00:10:43.488 | 99.99th=[65274] 00:10:43.488 bw ( KiB/s): min=17640, max=21368, per=27.24%, avg=19504.00, stdev=2636.09, samples=2 00:10:43.488 iops : min= 4410, max= 5342, avg=4876.00, stdev=659.02, samples=2 00:10:43.488 lat (msec) : 4=0.07%, 10=27.81%, 20=60.73%, 50=10.27%, 100=1.11% 00:10:43.488 cpu : usr=3.08%, sys=6.25%, ctx=297, majf=0, minf=1 00:10:43.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:10:43.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.488 issued rwts: total=4608,5003,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.488 job2: (groupid=0, jobs=1): err= 0: pid=188526: Sun Dec 15 05:10:56 2024 00:10:43.488 read: IOPS=4364, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1003msec) 00:10:43.488 slat (nsec): min=1200, max=21801k, avg=106288.48, stdev=929671.50 00:10:43.488 clat (usec): min=530, max=59308, avg=15068.41, stdev=10139.88 00:10:43.488 lat (usec): min=536, max=59333, avg=15174.70, 
stdev=10232.86 00:10:43.488 clat percentiles (usec): 00:10:43.488 | 1.00th=[ 1745], 5.00th=[ 5276], 10.00th=[ 6521], 20.00th=[ 9372], 00:10:43.488 | 30.00th=[10290], 40.00th=[10683], 50.00th=[11600], 60.00th=[12518], 00:10:43.488 | 70.00th=[14484], 80.00th=[21627], 90.00th=[26608], 95.00th=[42206], 00:10:43.488 | 99.00th=[48497], 99.50th=[51119], 99.90th=[57410], 99.95th=[57410], 00:10:43.488 | 99.99th=[59507] 00:10:43.488 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:43.488 slat (usec): min=2, max=18877, avg=81.42, stdev=689.71 00:10:43.488 clat (usec): min=385, max=55614, avg=13202.07, stdev=8090.53 00:10:43.488 lat (usec): min=396, max=55623, avg=13283.49, stdev=8129.55 00:10:43.488 clat percentiles (usec): 00:10:43.488 | 1.00th=[ 1057], 5.00th=[ 2933], 10.00th=[ 4817], 20.00th=[ 8225], 00:10:43.488 | 30.00th=[ 9634], 40.00th=[10683], 50.00th=[11207], 60.00th=[12649], 00:10:43.488 | 70.00th=[14615], 80.00th=[17171], 90.00th=[20841], 95.00th=[31589], 00:10:43.488 | 99.00th=[45876], 99.50th=[46924], 99.90th=[55313], 99.95th=[55313], 00:10:43.488 | 99.99th=[55837] 00:10:43.488 bw ( KiB/s): min=16352, max=20512, per=25.75%, avg=18432.00, stdev=2941.56, samples=2 00:10:43.488 iops : min= 4088, max= 5128, avg=4608.00, stdev=735.39, samples=2 00:10:43.488 lat (usec) : 500=0.06%, 750=0.07%, 1000=0.36% 00:10:43.488 lat (msec) : 2=1.74%, 4=3.58%, 10=22.02%, 20=55.33%, 50=16.30% 00:10:43.488 lat (msec) : 100=0.55% 00:10:43.488 cpu : usr=2.89%, sys=6.29%, ctx=338, majf=0, minf=1 00:10:43.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:43.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.488 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.488 issued rwts: total=4378,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.488 job3: (groupid=0, jobs=1): err= 0: pid=188529: 
Sun Dec 15 05:10:56 2024 00:10:43.488 read: IOPS=3541, BW=13.8MiB/s (14.5MB/s)(14.0MiB/1012msec) 00:10:43.488 slat (nsec): min=1317, max=18930k, avg=116821.42, stdev=936505.65 00:10:43.488 clat (usec): min=4722, max=43167, avg=15290.55, stdev=4799.75 00:10:43.488 lat (usec): min=4733, max=43178, avg=15407.37, stdev=4876.49 00:10:43.488 clat percentiles (usec): 00:10:43.488 | 1.00th=[ 8094], 5.00th=[ 9896], 10.00th=[10814], 20.00th=[11469], 00:10:43.488 | 30.00th=[12518], 40.00th=[13173], 50.00th=[14353], 60.00th=[14746], 00:10:43.488 | 70.00th=[16319], 80.00th=[19006], 90.00th=[21627], 95.00th=[25560], 00:10:43.488 | 99.00th=[31327], 99.50th=[31327], 99.90th=[31851], 99.95th=[35390], 00:10:43.488 | 99.99th=[43254] 00:10:43.488 write: IOPS=3745, BW=14.6MiB/s (15.3MB/s)(14.8MiB/1012msec); 0 zone resets 00:10:43.488 slat (usec): min=2, max=14324, avg=145.62, stdev=842.84 00:10:43.488 clat (usec): min=1673, max=80486, avg=19361.07, stdev=14055.74 00:10:43.488 lat (usec): min=1685, max=80493, avg=19506.69, stdev=14146.63 00:10:43.488 clat percentiles (usec): 00:10:43.488 | 1.00th=[ 4490], 5.00th=[ 8225], 10.00th=[ 9503], 20.00th=[10552], 00:10:43.488 | 30.00th=[11338], 40.00th=[11600], 50.00th=[12518], 60.00th=[14484], 00:10:43.488 | 70.00th=[19268], 80.00th=[30278], 90.00th=[39584], 95.00th=[45351], 00:10:43.488 | 99.00th=[72877], 99.50th=[78119], 99.90th=[80217], 99.95th=[80217], 00:10:43.488 | 99.99th=[80217] 00:10:43.488 bw ( KiB/s): min=13104, max=16200, per=20.47%, avg=14652.00, stdev=2189.20, samples=2 00:10:43.488 iops : min= 3276, max= 4050, avg=3663.00, stdev=547.30, samples=2 00:10:43.488 lat (msec) : 2=0.03%, 4=0.31%, 10=9.56%, 20=67.62%, 50=20.68% 00:10:43.488 lat (msec) : 100=1.80% 00:10:43.488 cpu : usr=3.26%, sys=5.24%, ctx=348, majf=0, minf=1 00:10:43.488 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:10:43.488 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:43.488 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:43.488 issued rwts: total=3584,3790,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:43.488 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:43.488 00:10:43.488 Run status group 0 (all jobs): 00:10:43.488 READ: bw=66.3MiB/s (69.5MB/s), 13.8MiB/s-17.9MiB/s (14.5MB/s-18.8MB/s), io=67.1MiB (70.4MB), run=1003-1012msec 00:10:43.488 WRITE: bw=69.9MiB/s (73.3MB/s), 14.6MiB/s-19.4MiB/s (15.3MB/s-20.3MB/s), io=70.8MiB (74.2MB), run=1003-1012msec 00:10:43.488 00:10:43.488 Disk stats (read/write): 00:10:43.488 nvme0n1: ios=3634/3703, merge=0/0, ticks=44813/53859, in_queue=98672, util=85.97% 00:10:43.488 nvme0n2: ios=4128/4102, merge=0/0, ticks=33165/23235, in_queue=56400, util=95.68% 00:10:43.488 nvme0n3: ios=3132/3468, merge=0/0, ticks=42715/41492, in_queue=84207, util=96.63% 00:10:43.488 nvme0n4: ios=3081/3079, merge=0/0, ticks=46642/52259, in_queue=98901, util=97.57% 00:10:43.488 05:10:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:43.488 [global] 00:10:43.488 thread=1 00:10:43.488 invalidate=1 00:10:43.488 rw=randwrite 00:10:43.488 time_based=1 00:10:43.488 runtime=1 00:10:43.488 ioengine=libaio 00:10:43.488 direct=1 00:10:43.488 bs=4096 00:10:43.488 iodepth=128 00:10:43.488 norandommap=0 00:10:43.488 numjobs=1 00:10:43.488 00:10:43.488 verify_dump=1 00:10:43.488 verify_backlog=512 00:10:43.488 verify_state_save=0 00:10:43.488 do_verify=1 00:10:43.488 verify=crc32c-intel 00:10:43.488 [job0] 00:10:43.488 filename=/dev/nvme0n1 00:10:43.488 [job1] 00:10:43.488 filename=/dev/nvme0n2 00:10:43.488 [job2] 00:10:43.488 filename=/dev/nvme0n3 00:10:43.488 [job3] 00:10:43.488 filename=/dev/nvme0n4 00:10:43.488 Could not set queue depth (nvme0n1) 00:10:43.488 Could not set queue depth (nvme0n2) 00:10:43.488 Could not set queue depth (nvme0n3) 00:10:43.488 Could not set queue depth 
(nvme0n4) 00:10:43.488 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.488 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.488 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.488 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:43.488 fio-3.35 00:10:43.488 Starting 4 threads 00:10:44.864 00:10:44.864 job0: (groupid=0, jobs=1): err= 0: pid=188966: Sun Dec 15 05:10:58 2024 00:10:44.864 read: IOPS=6257, BW=24.4MiB/s (25.6MB/s)(24.5MiB/1002msec) 00:10:44.864 slat (nsec): min=1282, max=11031k, avg=88176.64, stdev=666716.24 00:10:44.864 clat (usec): min=1237, max=22790, avg=10746.32, stdev=2810.10 00:10:44.864 lat (usec): min=3385, max=28582, avg=10834.50, stdev=2863.42 00:10:44.864 clat percentiles (usec): 00:10:44.864 | 1.00th=[ 4146], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 8094], 00:10:44.864 | 30.00th=[ 9110], 40.00th=[10159], 50.00th=[10945], 60.00th=[11207], 00:10:44.864 | 70.00th=[11469], 80.00th=[12256], 90.00th=[14615], 95.00th=[16450], 00:10:44.864 | 99.00th=[19006], 99.50th=[19792], 99.90th=[20841], 99.95th=[20841], 00:10:44.864 | 99.99th=[22676] 00:10:44.864 write: IOPS=6642, BW=25.9MiB/s (27.2MB/s)(26.0MiB/1002msec); 0 zone resets 00:10:44.864 slat (usec): min=2, max=9101, avg=62.29, stdev=347.64 00:10:44.865 clat (usec): min=1325, max=20955, avg=8938.72, stdev=2271.77 00:10:44.865 lat (usec): min=1338, max=20957, avg=9001.01, stdev=2309.44 00:10:44.865 clat percentiles (usec): 00:10:44.865 | 1.00th=[ 2573], 5.00th=[ 4490], 10.00th=[ 5866], 20.00th=[ 7439], 00:10:44.865 | 30.00th=[ 7963], 40.00th=[ 8160], 50.00th=[ 9241], 60.00th=[ 9896], 00:10:44.865 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:10:44.865 | 99.00th=[12125], 99.50th=[15008], 
99.90th=[19792], 99.95th=[20055], 00:10:44.865 | 99.99th=[20841] 00:10:44.865 bw ( KiB/s): min=24576, max=28664, per=38.30%, avg=26620.00, stdev=2890.65, samples=2 00:10:44.865 iops : min= 6144, max= 7166, avg=6655.00, stdev=722.66, samples=2 00:10:44.865 lat (msec) : 2=0.03%, 4=2.28%, 10=47.93%, 20=49.53%, 50=0.23% 00:10:44.865 cpu : usr=4.50%, sys=5.49%, ctx=750, majf=0, minf=1 00:10:44.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:10:44.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.865 issued rwts: total=6270,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.865 job1: (groupid=0, jobs=1): err= 0: pid=188984: Sun Dec 15 05:10:58 2024 00:10:44.865 read: IOPS=2924, BW=11.4MiB/s (12.0MB/s)(11.5MiB/1005msec) 00:10:44.865 slat (nsec): min=1224, max=14158k, avg=145601.91, stdev=892800.26 00:10:44.865 clat (usec): min=2289, max=55064, avg=15538.62, stdev=9262.03 00:10:44.865 lat (usec): min=5278, max=55074, avg=15684.22, stdev=9341.83 00:10:44.865 clat percentiles (usec): 00:10:44.865 | 1.00th=[ 5604], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10814], 00:10:44.865 | 30.00th=[11338], 40.00th=[11600], 50.00th=[11731], 60.00th=[12911], 00:10:44.865 | 70.00th=[14353], 80.00th=[16450], 90.00th=[27919], 95.00th=[39584], 00:10:44.865 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:10:44.865 | 99.99th=[55313] 00:10:44.865 write: IOPS=3056, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1005msec); 0 zone resets 00:10:44.865 slat (usec): min=2, max=8722, avg=180.33, stdev=773.42 00:10:44.865 clat (usec): min=1535, max=58673, avg=26632.20, stdev=14201.94 00:10:44.865 lat (usec): min=1546, max=58685, avg=26812.53, stdev=14301.27 00:10:44.865 clat percentiles (usec): 00:10:44.865 | 1.00th=[ 3556], 5.00th=[ 8356], 10.00th=[ 9110], 20.00th=[11469], 
00:10:44.865 | 30.00th=[17957], 40.00th=[20317], 50.00th=[23462], 60.00th=[29492], 00:10:44.865 | 70.00th=[35914], 80.00th=[40633], 90.00th=[46924], 95.00th=[52167], 00:10:44.865 | 99.00th=[55313], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:10:44.865 | 99.99th=[58459] 00:10:44.865 bw ( KiB/s): min=10736, max=13840, per=17.68%, avg=12288.00, stdev=2194.86, samples=2 00:10:44.865 iops : min= 2684, max= 3460, avg=3072.00, stdev=548.71, samples=2 00:10:44.865 lat (msec) : 2=0.05%, 4=0.82%, 10=10.46%, 20=46.70%, 50=37.90% 00:10:44.865 lat (msec) : 100=4.08% 00:10:44.865 cpu : usr=2.59%, sys=4.18%, ctx=371, majf=0, minf=2 00:10:44.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:44.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.865 issued rwts: total=2939,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.865 job2: (groupid=0, jobs=1): err= 0: pid=189005: Sun Dec 15 05:10:58 2024 00:10:44.865 read: IOPS=3053, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1006msec) 00:10:44.865 slat (nsec): min=1647, max=27462k, avg=145148.68, stdev=1108602.37 00:10:44.865 clat (usec): min=9516, max=64427, avg=18716.65, stdev=9149.79 00:10:44.865 lat (usec): min=9526, max=64453, avg=18861.79, stdev=9245.89 00:10:44.865 clat percentiles (usec): 00:10:44.865 | 1.00th=[10552], 5.00th=[12780], 10.00th=[13304], 20.00th=[13829], 00:10:44.865 | 30.00th=[14222], 40.00th=[14615], 50.00th=[14877], 60.00th=[16057], 00:10:44.865 | 70.00th=[16909], 80.00th=[22152], 90.00th=[27657], 95.00th=[40633], 00:10:44.865 | 99.00th=[52691], 99.50th=[52691], 99.90th=[52691], 99.95th=[61080], 00:10:44.865 | 99.99th=[64226] 00:10:44.865 write: IOPS=3123, BW=12.2MiB/s (12.8MB/s)(12.3MiB/1006msec); 0 zone resets 00:10:44.865 slat (usec): min=2, max=13831, avg=168.76, stdev=861.19 00:10:44.865 
clat (usec): min=4347, max=58663, avg=22139.82, stdev=11385.02 00:10:44.865 lat (usec): min=7438, max=58675, avg=22308.59, stdev=11448.92 00:10:44.865 clat percentiles (usec): 00:10:44.865 | 1.00th=[10814], 5.00th=[12387], 10.00th=[12518], 20.00th=[12780], 00:10:44.865 | 30.00th=[13173], 40.00th=[13960], 50.00th=[20055], 60.00th=[21365], 00:10:44.865 | 70.00th=[24249], 80.00th=[28967], 90.00th=[38011], 95.00th=[51643], 00:10:44.865 | 99.00th=[57934], 99.50th=[57934], 99.90th=[58459], 99.95th=[58459], 00:10:44.865 | 99.99th=[58459] 00:10:44.865 bw ( KiB/s): min= 8200, max=16376, per=17.68%, avg=12288.00, stdev=5781.31, samples=2 00:10:44.865 iops : min= 2050, max= 4094, avg=3072.00, stdev=1445.33, samples=2 00:10:44.865 lat (msec) : 10=0.53%, 20=62.62%, 50=32.49%, 100=4.36% 00:10:44.865 cpu : usr=3.18%, sys=4.68%, ctx=296, majf=0, minf=1 00:10:44.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=99.0% 00:10:44.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.865 issued rwts: total=3072,3142,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.865 job3: (groupid=0, jobs=1): err= 0: pid=189009: Sun Dec 15 05:10:58 2024 00:10:44.865 read: IOPS=4363, BW=17.0MiB/s (17.9MB/s)(17.1MiB/1005msec) 00:10:44.865 slat (nsec): min=994, max=19822k, avg=134815.58, stdev=965420.74 00:10:44.865 clat (msec): min=3, max=101, avg=16.55, stdev=15.72 00:10:44.865 lat (msec): min=5, max=101, avg=16.68, stdev=15.81 00:10:44.865 clat percentiles (msec): 00:10:44.865 | 1.00th=[ 6], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:10:44.865 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 12], 60.00th=[ 12], 00:10:44.865 | 70.00th=[ 13], 80.00th=[ 15], 90.00th=[ 33], 95.00th=[ 46], 00:10:44.865 | 99.00th=[ 90], 99.50th=[ 99], 99.90th=[ 102], 99.95th=[ 102], 00:10:44.865 | 99.99th=[ 102] 00:10:44.865 
write: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec); 0 zone resets 00:10:44.865 slat (nsec): min=1816, max=11348k, avg=83483.11, stdev=439630.80 00:10:44.865 clat (usec): min=5587, max=30774, avg=11734.54, stdev=2871.04 00:10:44.865 lat (usec): min=5601, max=30785, avg=11818.03, stdev=2883.54 00:10:44.865 clat percentiles (usec): 00:10:44.865 | 1.00th=[ 6587], 5.00th=[ 8848], 10.00th=[10290], 20.00th=[10552], 00:10:44.865 | 30.00th=[10814], 40.00th=[10945], 50.00th=[11076], 60.00th=[11207], 00:10:44.865 | 70.00th=[11469], 80.00th=[11863], 90.00th=[14877], 95.00th=[17171], 00:10:44.865 | 99.00th=[27919], 99.50th=[28181], 99.90th=[30802], 99.95th=[30802], 00:10:44.865 | 99.99th=[30802] 00:10:44.865 bw ( KiB/s): min=15176, max=21688, per=26.52%, avg=18432.00, stdev=4604.68, samples=2 00:10:44.865 iops : min= 3794, max= 5422, avg=4608.00, stdev=1151.17, samples=2 00:10:44.865 lat (msec) : 4=0.01%, 10=11.79%, 20=78.76%, 50=7.17%, 100=2.13% 00:10:44.865 lat (msec) : 250=0.13% 00:10:44.865 cpu : usr=3.09%, sys=4.58%, ctx=484, majf=0, minf=2 00:10:44.865 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:44.865 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:44.865 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:44.865 issued rwts: total=4385,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:44.865 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:44.865 00:10:44.865 Run status group 0 (all jobs): 00:10:44.865 READ: bw=64.7MiB/s (67.9MB/s), 11.4MiB/s-24.4MiB/s (12.0MB/s-25.6MB/s), io=65.1MiB (68.3MB), run=1002-1006msec 00:10:44.865 WRITE: bw=67.9MiB/s (71.2MB/s), 11.9MiB/s-25.9MiB/s (12.5MB/s-27.2MB/s), io=68.3MiB (71.6MB), run=1002-1006msec 00:10:44.865 00:10:44.865 Disk stats (read/write): 00:10:44.865 nvme0n1: ios=5140/5422, merge=0/0, ticks=55659/49241, in_queue=104900, util=98.20% 00:10:44.865 nvme0n2: ios=2097/2447, merge=0/0, ticks=31694/72098, 
in_queue=103792, util=87.80% 00:10:44.865 nvme0n3: ios=2618/2999, merge=0/0, ticks=20981/31483, in_queue=52464, util=95.72% 00:10:44.865 nvme0n4: ios=4153/4608, merge=0/0, ticks=24548/23361, in_queue=47909, util=93.15% 00:10:44.865 05:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:44.865 05:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=189128 00:10:44.865 05:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:44.865 05:10:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:44.865 [global] 00:10:44.865 thread=1 00:10:44.865 invalidate=1 00:10:44.865 rw=read 00:10:44.865 time_based=1 00:10:44.865 runtime=10 00:10:44.865 ioengine=libaio 00:10:44.865 direct=1 00:10:44.865 bs=4096 00:10:44.865 iodepth=1 00:10:44.865 norandommap=1 00:10:44.865 numjobs=1 00:10:44.865 00:10:44.865 [job0] 00:10:44.865 filename=/dev/nvme0n1 00:10:44.865 [job1] 00:10:44.865 filename=/dev/nvme0n2 00:10:44.865 [job2] 00:10:44.865 filename=/dev/nvme0n3 00:10:44.866 [job3] 00:10:44.866 filename=/dev/nvme0n4 00:10:44.866 Could not set queue depth (nvme0n1) 00:10:44.866 Could not set queue depth (nvme0n2) 00:10:44.866 Could not set queue depth (nvme0n3) 00:10:44.866 Could not set queue depth (nvme0n4) 00:10:45.124 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.124 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.124 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.124 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:45.124 fio-3.35 00:10:45.124 Starting 4 threads 00:10:48.408 05:11:01 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:48.408 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=46428160, buflen=4096 00:10:48.408 fio: pid=189470, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.408 05:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:48.408 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44494848, buflen=4096 00:10:48.408 fio: pid=189469, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.408 05:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.408 05:11:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:48.408 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.408 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:48.408 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=10559488, buflen=4096 00:10:48.408 fio: pid=189466, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.667 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.667 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 
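The "Operation not supported" io_u errors that follow are expected at this point in the run: the test deliberately deletes the backing raid/malloc bdevs via `rpc.py` while fio reads are still in flight, so subsequent I/O to the exported namespaces fails. As a hypothetical helper (not part of the SPDK test suite) for picking these error records out of a log, the fields can be extracted with a regular expression matching the exact line shape logged here:

```python
import re

# Matches fio io_u error lines of the form seen in this log, e.g.
# "fio: io_u error on file /dev/nvme0n3: Operation not supported:
#  read offset=44494848, buflen=4096"
ERR_RE = re.compile(
    r"fio: io_u error on file (?P<dev>\S+): (?P<msg>[^:]+): "
    r"(?P<op>\w+) offset=(?P<offset>\d+), buflen=(?P<buflen>\d+)"
)

line = ("fio: io_u error on file /dev/nvme0n3: Operation not supported: "
        "read offset=44494848, buflen=4096")
m = ERR_RE.search(line)
# Pull out the device, error message, and failed I/O parameters
dev = m.group("dev")
offset = int(m.group("offset"))
buflen = int(m.group("buflen"))
```

This is only an illustration of the log format; the buflen of 4096 matches the `bs=4096` set in the fio job file above.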
00:10:48.667 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=16596992, buflen=4096 00:10:48.667 fio: pid=189468, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:10:48.667 00:10:48.667 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189466: Sun Dec 15 05:11:02 2024 00:10:48.667 read: IOPS=815, BW=3260KiB/s (3338kB/s)(10.1MiB/3163msec) 00:10:48.667 slat (nsec): min=6401, max=75019, avg=8011.56, stdev=2980.87 00:10:48.667 clat (usec): min=172, max=43925, avg=1209.10, stdev=6328.01 00:10:48.667 lat (usec): min=179, max=43947, avg=1217.11, stdev=6330.40 00:10:48.667 clat percentiles (usec): 00:10:48.667 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:10:48.667 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:10:48.667 | 70.00th=[ 215], 80.00th=[ 221], 90.00th=[ 227], 95.00th=[ 235], 00:10:48.667 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:10:48.667 | 99.99th=[43779] 00:10:48.667 bw ( KiB/s): min= 96, max=18512, per=10.00%, avg=3432.50, stdev=7415.06, samples=6 00:10:48.667 iops : min= 24, max= 4628, avg=858.00, stdev=1853.83, samples=6 00:10:48.667 lat (usec) : 250=96.86%, 500=0.62%, 750=0.04% 00:10:48.667 lat (msec) : 50=2.44% 00:10:48.667 cpu : usr=0.16%, sys=0.89%, ctx=2584, majf=0, minf=1 00:10:48.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 issued rwts: total=2579,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.667 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189468: Sun Dec 15 05:11:02 2024 00:10:48.667 read: IOPS=1206, BW=4824KiB/s 
(4940kB/s)(15.8MiB/3360msec) 00:10:48.667 slat (usec): min=3, max=15492, avg=14.67, stdev=298.58 00:10:48.667 clat (usec): min=192, max=42366, avg=808.19, stdev=4628.82 00:10:48.667 lat (usec): min=199, max=56667, avg=822.86, stdev=4704.70 00:10:48.667 clat percentiles (usec): 00:10:48.667 | 1.00th=[ 225], 5.00th=[ 245], 10.00th=[ 251], 20.00th=[ 258], 00:10:48.667 | 30.00th=[ 265], 40.00th=[ 269], 50.00th=[ 273], 60.00th=[ 281], 00:10:48.667 | 70.00th=[ 285], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:10:48.667 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206], 00:10:48.667 | 99.99th=[42206] 00:10:48.667 bw ( KiB/s): min= 248, max=14536, per=15.67%, avg=5377.00, stdev=6871.15, samples=6 00:10:48.667 iops : min= 62, max= 3634, avg=1344.17, stdev=1717.86, samples=6 00:10:48.667 lat (usec) : 250=9.99%, 500=88.63% 00:10:48.667 lat (msec) : 4=0.05%, 50=1.31% 00:10:48.667 cpu : usr=0.30%, sys=0.95%, ctx=4058, majf=0, minf=2 00:10:48.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 issued rwts: total=4053,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.667 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189469: Sun Dec 15 05:11:02 2024 00:10:48.667 read: IOPS=3705, BW=14.5MiB/s (15.2MB/s)(42.4MiB/2932msec) 00:10:48.667 slat (usec): min=7, max=11334, avg=10.90, stdev=130.64 00:10:48.667 clat (usec): min=177, max=1054, avg=254.94, stdev=26.02 00:10:48.667 lat (usec): min=185, max=11761, avg=265.84, stdev=135.51 00:10:48.667 clat percentiles (usec): 00:10:48.667 | 1.00th=[ 206], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:10:48.667 | 30.00th=[ 243], 40.00th=[ 247], 50.00th=[ 253], 60.00th=[ 258], 
00:10:48.667 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 297], 00:10:48.667 | 99.00th=[ 318], 99.50th=[ 338], 99.90th=[ 453], 99.95th=[ 490], 00:10:48.667 | 99.99th=[ 709] 00:10:48.667 bw ( KiB/s): min=13680, max=15512, per=43.44%, avg=14908.80, stdev=793.63, samples=5 00:10:48.667 iops : min= 3420, max= 3878, avg=3727.20, stdev=198.41, samples=5 00:10:48.667 lat (usec) : 250=45.55%, 500=54.40%, 750=0.03% 00:10:48.667 lat (msec) : 2=0.01% 00:10:48.667 cpu : usr=2.39%, sys=6.28%, ctx=10866, majf=0, minf=2 00:10:48.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 issued rwts: total=10864,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.667 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=189470: Sun Dec 15 05:11:02 2024 00:10:48.667 read: IOPS=4183, BW=16.3MiB/s (17.1MB/s)(44.3MiB/2710msec) 00:10:48.667 slat (nsec): min=6464, max=30305, avg=7417.37, stdev=971.69 00:10:48.667 clat (usec): min=182, max=820, avg=229.77, stdev=17.66 00:10:48.667 lat (usec): min=189, max=827, avg=237.18, stdev=17.78 00:10:48.667 clat percentiles (usec): 00:10:48.667 | 1.00th=[ 200], 5.00th=[ 206], 10.00th=[ 212], 20.00th=[ 217], 00:10:48.667 | 30.00th=[ 221], 40.00th=[ 225], 50.00th=[ 229], 60.00th=[ 233], 00:10:48.667 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 258], 00:10:48.667 | 99.00th=[ 277], 99.50th=[ 285], 99.90th=[ 330], 99.95th=[ 420], 00:10:48.667 | 99.99th=[ 553] 00:10:48.667 bw ( KiB/s): min=15736, max=17144, per=48.89%, avg=16779.20, stdev=587.08, samples=5 00:10:48.667 iops : min= 3934, max= 4286, avg=4194.80, stdev=146.77, samples=5 00:10:48.667 lat (usec) : 250=90.75%, 500=9.22%, 750=0.02%, 1000=0.01% 
00:10:48.667 cpu : usr=1.03%, sys=3.87%, ctx=11336, majf=0, minf=2 00:10:48.667 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:48.667 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:48.667 issued rwts: total=11336,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:48.667 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:48.667 00:10:48.667 Run status group 0 (all jobs): 00:10:48.667 READ: bw=33.5MiB/s (35.1MB/s), 3260KiB/s-16.3MiB/s (3338kB/s-17.1MB/s), io=113MiB (118MB), run=2710-3360msec 00:10:48.667 00:10:48.667 Disk stats (read/write): 00:10:48.667 nvme0n1: ios=2612/0, merge=0/0, ticks=3978/0, in_queue=3978, util=98.98% 00:10:48.667 nvme0n2: ios=4053/0, merge=0/0, ticks=3274/0, in_queue=3274, util=95.35% 00:10:48.667 nvme0n3: ios=10624/0, merge=0/0, ticks=2584/0, in_queue=2584, util=95.94% 00:10:48.667 nvme0n4: ios=10924/0, merge=0/0, ticks=2454/0, in_queue=2454, util=96.48% 00:10:48.925 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:48.925 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:49.184 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.184 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:49.441 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.441 05:11:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:49.441 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:49.441 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:49.700 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:49.700 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 189128 00:10:49.700 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:49.700 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:49.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:49.958 nvmf hotplug test: fio failed as expected 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:10:49.958 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:10:49.958 rmmod nvme_tcp 00:10:49.958 rmmod nvme_fabrics 00:10:50.217 rmmod nvme_keyring 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:10:50.217 05:11:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 186462 ']' 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 186462 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 186462 ']' 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 186462 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 186462 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 186462' 00:10:50.217 killing process with pid 186462 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 186462 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 186462 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 
00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:10:50.217 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:10:50.477 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:10:50.477 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:10:50.477 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:50.477 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:50.477 05:11:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.385 05:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:10:52.385 00:10:52.385 real 0m26.896s 00:10:52.385 user 1m46.172s 00:10:52.385 sys 0m9.007s 00:10:52.385 05:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:52.385 05:11:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:52.385 ************************************ 00:10:52.385 END TEST nvmf_fio_target 00:10:52.385 ************************************ 00:10:52.385 05:11:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:52.385 05:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:52.385 05:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:52.385 05:11:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:52.385 ************************************ 00:10:52.385 START 
TEST nvmf_bdevio 00:10:52.385 ************************************ 00:10:52.385 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:10:52.646 * Looking for test storage... 00:10:52.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:52.646 05:11:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:10:52.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.646 --rc genhtml_branch_coverage=1 00:10:52.646 --rc genhtml_function_coverage=1 00:10:52.646 --rc genhtml_legend=1 00:10:52.646 --rc geninfo_all_blocks=1 00:10:52.646 --rc geninfo_unexecuted_blocks=1 00:10:52.646 00:10:52.646 ' 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:10:52.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.646 --rc genhtml_branch_coverage=1 00:10:52.646 --rc genhtml_function_coverage=1 00:10:52.646 --rc genhtml_legend=1 00:10:52.646 --rc geninfo_all_blocks=1 00:10:52.646 --rc geninfo_unexecuted_blocks=1 00:10:52.646 00:10:52.646 ' 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:10:52.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.646 --rc genhtml_branch_coverage=1 00:10:52.646 --rc genhtml_function_coverage=1 00:10:52.646 --rc genhtml_legend=1 00:10:52.646 --rc geninfo_all_blocks=1 00:10:52.646 --rc geninfo_unexecuted_blocks=1 00:10:52.646 00:10:52.646 ' 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:10:52.646 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:52.646 --rc genhtml_branch_coverage=1 00:10:52.646 --rc genhtml_function_coverage=1 00:10:52.646 --rc genhtml_legend=1 00:10:52.646 --rc geninfo_all_blocks=1 00:10:52.646 --rc geninfo_unexecuted_blocks=1 00:10:52.646 00:10:52.646 ' 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:52.646 05:11:06 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:52.646 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:10:52.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:10:52.647 05:11:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.224 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:59.224 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:10:59.224 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:10:59.224 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:59.225 05:11:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:10:59.225 05:11:11 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:10:59.225 Found 0000:af:00.0 (0x8086 - 0x159b) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:10:59.225 Found 0000:af:00.1 (0x8086 - 0x159b) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:10:59.225 
05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:10:59.225 Found net devices under 0000:af:00.0: cvl_0_0 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:10:59.225 Found net devices under 0000:af:00.1: cvl_0_1 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:59.225 05:11:11 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:10:59.226 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:59.226 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:10:59.226 00:10:59.226 --- 10.0.0.2 ping statistics --- 00:10:59.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.226 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:59.226 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:59.226 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:10:59.226 00:10:59.226 --- 10.0.0.1 ping statistics --- 00:10:59.226 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:59.226 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:10:59.226 05:11:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@509 -- # nvmfpid=193678 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 193678 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 193678 ']' 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:59.226 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 [2024-12-15 05:11:12.219144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:10:59.226 [2024-12-15 05:11:12.219195] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:59.226 [2024-12-15 05:11:12.298919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:59.226 [2024-12-15 05:11:12.321587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:59.226 [2024-12-15 05:11:12.321623] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:59.226 [2024-12-15 05:11:12.321630] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:59.226 [2024-12-15 05:11:12.321635] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:59.226 [2024-12-15 05:11:12.321640] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
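[Editor's note: the target above was launched with `-m 0x78`. As a side note on reading these logs, an SPDK/DPDK core mask is just a bitmap of CPU indices; a minimal sketch of decoding it (hypothetical helper, not part of the test scripts) shows why the trace below reports reactors starting on cores 3 through 6:]

```python
def decode_core_mask(mask: int) -> list[int]:
    """Return the CPU core indices selected by an SPDK/DPDK -m/-c core mask."""
    cores = []
    bit = 0
    while mask:
        if mask & 1:
            cores.append(bit)
        mask >>= 1
        bit += 1
    return cores

# -m 0x78 from the nvmf_tgt invocation above: 0x78 = 0b1111000
print(decode_core_mask(0x78))  # → [3, 4, 5, 6]
# -c 0x7 from the bdevio invocation later in this log: cores 0-2
print(decode_core_mask(0x7))   # → [0, 1, 2]
```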
00:10:59.226 [2024-12-15 05:11:12.323124] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:10:59.226 [2024-12-15 05:11:12.323234] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:10:59.226 [2024-12-15 05:11:12.323340] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:10:59.226 [2024-12-15 05:11:12.323342] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 [2024-12-15 05:11:12.454477] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.226 05:11:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 Malloc0 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:10:59.226 [2024-12-15 05:11:12.517464] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:10:59.226 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:10:59.226 { 00:10:59.226 "params": { 00:10:59.226 "name": "Nvme$subsystem", 00:10:59.226 "trtype": "$TEST_TRANSPORT", 00:10:59.226 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:59.226 "adrfam": "ipv4", 00:10:59.226 "trsvcid": "$NVMF_PORT", 00:10:59.226 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:59.226 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:59.226 "hdgst": ${hdgst:-false}, 00:10:59.226 "ddgst": ${ddgst:-false} 00:10:59.226 }, 00:10:59.226 "method": "bdev_nvme_attach_controller" 00:10:59.226 } 00:10:59.226 EOF 00:10:59.226 )") 00:10:59.227 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:10:59.227 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
00:10:59.227 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:10:59.227 05:11:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:10:59.227 "params": { 00:10:59.227 "name": "Nvme1", 00:10:59.227 "trtype": "tcp", 00:10:59.227 "traddr": "10.0.0.2", 00:10:59.227 "adrfam": "ipv4", 00:10:59.227 "trsvcid": "4420", 00:10:59.227 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:59.227 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:59.227 "hdgst": false, 00:10:59.227 "ddgst": false 00:10:59.227 }, 00:10:59.227 "method": "bdev_nvme_attach_controller" 00:10:59.227 }' 00:10:59.227 [2024-12-15 05:11:12.568574] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:10:59.227 [2024-12-15 05:11:12.568618] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid193889 ] 00:10:59.227 [2024-12-15 05:11:12.643506] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:10:59.227 [2024-12-15 05:11:12.668577] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:10:59.227 [2024-12-15 05:11:12.668684] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:59.227 [2024-12-15 05:11:12.668685] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:10:59.227 I/O targets: 00:10:59.227 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:10:59.227 00:10:59.227 00:10:59.227 CUnit - A unit testing framework for C - Version 2.1-3 00:10:59.227 http://cunit.sourceforge.net/ 00:10:59.227 00:10:59.227 00:10:59.227 Suite: bdevio tests on: Nvme1n1 00:10:59.227 Test: blockdev write read block ...passed 00:10:59.485 Test: blockdev write zeroes read block ...passed 00:10:59.485 Test: blockdev write zeroes read no split ...passed 00:10:59.485 Test: blockdev write zeroes read split 
...passed 00:10:59.485 Test: blockdev write zeroes read split partial ...passed 00:10:59.485 Test: blockdev reset ...[2024-12-15 05:11:13.022573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:10:59.485 [2024-12-15 05:11:13.022634] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xbe5630 (9): Bad file descriptor 00:10:59.485 [2024-12-15 05:11:13.035205] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:10:59.485 passed 00:10:59.485 Test: blockdev write read 8 blocks ...passed 00:10:59.485 Test: blockdev write read size > 128k ...passed 00:10:59.485 Test: blockdev write read invalid size ...passed 00:10:59.485 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:10:59.485 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:10:59.485 Test: blockdev write read max offset ...passed 00:10:59.743 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:10:59.743 Test: blockdev writev readv 8 blocks ...passed 00:10:59.743 Test: blockdev writev readv 30 x 1block ...passed 00:10:59.743 Test: blockdev writev readv block ...passed 00:10:59.743 Test: blockdev writev readv size > 128k ...passed 00:10:59.743 Test: blockdev writev readv size > 128k in two iovs ...passed 00:10:59.743 Test: blockdev comparev and writev ...[2024-12-15 05:11:13.288770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 05:11:13.288802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.288816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 
05:11:13.288824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.289059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 05:11:13.289070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.289082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 05:11:13.289089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.289318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 05:11:13.289329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.289341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 05:11:13.289348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.289583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 05:11:13.289594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.289606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:10:59.743 [2024-12-15 05:11:13.289613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:10:59.743 passed 00:10:59.743 Test: blockdev nvme passthru rw ...passed 00:10:59.743 Test: blockdev nvme passthru vendor specific ...[2024-12-15 05:11:13.372339] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:59.743 [2024-12-15 05:11:13.372355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.372458] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:59.743 [2024-12-15 05:11:13.372468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.372567] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:59.743 [2024-12-15 05:11:13.372577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:10:59.743 [2024-12-15 05:11:13.372682] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:10:59.743 [2024-12-15 05:11:13.372692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:10:59.743 passed 00:10:59.743 Test: blockdev nvme admin passthru ...passed 00:10:59.743 Test: blockdev copy ...passed 00:10:59.743 00:10:59.743 Run Summary: Type Total Ran Passed Failed Inactive 00:10:59.743 suites 1 1 n/a 0 0 00:10:59.743 tests 23 23 23 0 0 00:10:59.743 asserts 152 152 152 0 n/a 00:10:59.743 00:10:59.743 Elapsed time = 1.203 seconds 
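[Editor's note: the bdevio run above consumed a JSON config produced by `gen_nvmf_target_json`, whose expanded output is printed earlier in the trace. A sketch of the equivalent construction in Python (hypothetical, mirroring only the field values visible in this log, not the real shell helper):]

```python
import json

def attach_controller_entry(n: int, traddr: str, trsvcid: str) -> dict:
    """One bdev_nvme_attach_controller config entry, matching the
    printf '%s\\n' output shown in the trace above."""
    return {
        "params": {
            "name": f"Nvme{n}",
            "trtype": "tcp",
            "traddr": traddr,
            "adrfam": "ipv4",
            "trsvcid": trsvcid,
            "subnqn": f"nqn.2016-06.io.spdk:cnode{n}",
            "hostnqn": f"nqn.2016-06.io.spdk:host{n}",
            "hdgst": False,
            "ddgst": False,
        },
        "method": "bdev_nvme_attach_controller",
    }

# Subsystem 1 on the target IP/port used in this run
print(json.dumps(attach_controller_entry(1, "10.0.0.2", "4420"), indent=2))
```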
00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:00.003 rmmod nvme_tcp 00:11:00.003 rmmod nvme_fabrics 00:11:00.003 rmmod nvme_keyring 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:11:00.003 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@517 -- # '[' -n 193678 ']' 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 193678 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 193678 ']' 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 193678 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 193678 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 193678' 00:11:00.004 killing process with pid 193678 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 193678 00:11:00.004 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 193678 00:11:00.263 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # iptr 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 
00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.264 05:11:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:02.810 05:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:02.810 00:11:02.810 real 0m9.884s 00:11:02.810 user 0m9.824s 00:11:02.810 sys 0m4.935s 00:11:02.810 05:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.810 05:11:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:02.810 ************************************ 00:11:02.810 END TEST nvmf_bdevio 00:11:02.810 ************************************ 00:11:02.810 05:11:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:02.810 00:11:02.810 real 4m33.987s 00:11:02.810 user 10m16.332s 00:11:02.810 sys 1m35.212s 00:11:02.810 05:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:02.810 05:11:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:02.810 ************************************ 00:11:02.810 END TEST nvmf_target_core 00:11:02.810 ************************************ 00:11:02.810 05:11:16 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:02.810 05:11:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.810 05:11:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.810 05:11:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 
00:11:02.810 ************************************ 00:11:02.810 START TEST nvmf_target_extra 00:11:02.810 ************************************ 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:02.810 * Looking for test storage... 00:11:02.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 
00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.810 --rc genhtml_branch_coverage=1 00:11:02.810 --rc genhtml_function_coverage=1 00:11:02.810 --rc genhtml_legend=1 00:11:02.810 --rc geninfo_all_blocks=1 
00:11:02.810 --rc geninfo_unexecuted_blocks=1 00:11:02.810 00:11:02.810 ' 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.810 --rc genhtml_branch_coverage=1 00:11:02.810 --rc genhtml_function_coverage=1 00:11:02.810 --rc genhtml_legend=1 00:11:02.810 --rc geninfo_all_blocks=1 00:11:02.810 --rc geninfo_unexecuted_blocks=1 00:11:02.810 00:11:02.810 ' 00:11:02.810 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.810 --rc genhtml_branch_coverage=1 00:11:02.810 --rc genhtml_function_coverage=1 00:11:02.810 --rc genhtml_legend=1 00:11:02.811 --rc geninfo_all_blocks=1 00:11:02.811 --rc geninfo_unexecuted_blocks=1 00:11:02.811 00:11:02.811 ' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.811 --rc genhtml_branch_coverage=1 00:11:02.811 --rc genhtml_function_coverage=1 00:11:02.811 --rc genhtml_legend=1 00:11:02.811 --rc geninfo_all_blocks=1 00:11:02.811 --rc geninfo_unexecuted_blocks=1 00:11:02.811 00:11:02.811 ' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 
-- # NVMF_IP_PREFIX=192.168.100 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # : 0 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.811 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:02.811 ************************************ 00:11:02.811 START TEST nvmf_example 00:11:02.811 ************************************ 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:02.811 * Looking for test storage... 00:11:02.811 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lcov --version 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:11:02.811 
05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 
00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:02.811 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:02.811 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.811 --rc genhtml_branch_coverage=1 00:11:02.812 --rc genhtml_function_coverage=1 00:11:02.812 --rc genhtml_legend=1 00:11:02.812 --rc geninfo_all_blocks=1 00:11:02.812 --rc geninfo_unexecuted_blocks=1 00:11:02.812 00:11:02.812 ' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:02.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.812 --rc genhtml_branch_coverage=1 00:11:02.812 --rc genhtml_function_coverage=1 00:11:02.812 --rc genhtml_legend=1 00:11:02.812 --rc geninfo_all_blocks=1 00:11:02.812 --rc geninfo_unexecuted_blocks=1 00:11:02.812 00:11:02.812 ' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:02.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.812 --rc genhtml_branch_coverage=1 00:11:02.812 --rc genhtml_function_coverage=1 00:11:02.812 --rc genhtml_legend=1 00:11:02.812 --rc geninfo_all_blocks=1 00:11:02.812 --rc geninfo_unexecuted_blocks=1 00:11:02.812 00:11:02.812 ' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:02.812 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:02.812 --rc 
genhtml_branch_coverage=1 00:11:02.812 --rc genhtml_function_coverage=1 00:11:02.812 --rc genhtml_legend=1 00:11:02.812 --rc geninfo_all_blocks=1 00:11:02.812 --rc geninfo_unexecuted_blocks=1 00:11:02.812 00:11:02.812 ' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # : 0 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:02.812 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:02.812 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:03.072 05:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:03.072 
05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # xtrace_disable 00:11:03.072 05:11:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # pci_devs=() 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # local -a pci_devs 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # net_devs=() 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # e810=() 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # local -ga e810 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # x722=() 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # local -ga x722 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@322 -- # mlx=() 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@322 -- # local -ga mlx 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:09.645 05:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:09.645 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:09.645 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@377 -- # 
[[ 0x159b == \0\x\1\0\1\9 ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:09.645 Found net devices under 0000:af:00.0: cvl_0_0 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:09.645 05:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:09.645 Found net devices under 0000:af:00.1: cvl_0_1 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@442 -- # is_hw=yes 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:09.645 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:09.646 
05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@790 -- # 
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:09.646 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:09.646 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:11:09.646 00:11:09.646 --- 10.0.0.2 ping statistics --- 00:11:09.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.646 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:09.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:09.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.048 ms 00:11:09.646 00:11:09.646 --- 10.0.0.1 ping statistics --- 00:11:09.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:09.646 rtt min/avg/max/mdev = 0.048/0.048/0.048/0.000 ms 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # return 0 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:09.646 05:11:22 
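The namespace-based TCP loopback topology brought up and verified above can be sketched roughly as follows. This is an illustrative dry-run reconstruction from the logged commands, not the test harness itself: the interface names `cvl_0_0`/`cvl_0_1` and addresses are taken from this log, and by default the sketch only prints the commands (executing them requires root and these exact NICs).

```shell
# Sketch (from the logged nvmf_tcp_init sequence): move the target NIC into a
# network namespace so initiator and target talk over a real TCP path.
# DRY_RUN=1 (the default) prints each command instead of executing it.
DRY_RUN=${DRY_RUN:-1}
run() { if [ "$DRY_RUN" = 1 ]; then echo "$@"; else "$@"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                                 # isolated namespace for the target side
run ip link set cvl_0_0 netns "$NS"                    # move the target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                # initiator IP on the host side
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target IP inside the namespace
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # admit NVMe/TCP traffic
run ping -c 1 10.0.0.2                                 # verify host -> namespace reachability
```

The pings in the log (host to 10.0.0.2 and, inside the namespace, back to 10.0.0.1) confirm the topology before any NVMe-oF traffic is started.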
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=197641 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 197641 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 197641 ']' 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:11:09.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:09.646 05:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:09.646 05:11:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w 
randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:19.617 Initializing NVMe Controllers 00:11:19.617 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:19.617 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:19.617 Initialization complete. Launching workers. 00:11:19.617 ======================================================== 00:11:19.617 Latency(us) 00:11:19.617 Device Information : IOPS MiB/s Average min max 00:11:19.617 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18401.80 71.88 3477.39 689.65 15595.39 00:11:19.617 ======================================================== 00:11:19.617 Total : 18401.80 71.88 3477.39 689.65 15595.39 00:11:19.617 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@516 -- # nvmfcleanup 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # sync 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set +e 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # for i in {1..20} 00:11:19.617 05:11:32 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:11:19.617 rmmod nvme_tcp 00:11:19.617 rmmod nvme_fabrics 00:11:19.617 rmmod nvme_keyring 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@128 -- # set -e 
00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # return 0 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@517 -- # '[' -n 197641 ']' 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@518 -- # killprocess 197641 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 197641 ']' 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 197641 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 197641 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 197641' 00:11:19.617 killing process with pid 197641 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 197641 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 197641 00:11:19.617 nvmf threads initialize successfully 00:11:19.617 bdev subsystem init successfully 00:11:19.617 created a nvmf target service 00:11:19.617 create targets's poll groups done 00:11:19.617 all subsystems of target started 00:11:19.617 nvmf target is running 00:11:19.617 all subsystems of target stopped 00:11:19.617 destroy targets's poll groups done 00:11:19.617 destroyed the nvmf target service 00:11:19.617 bdev subsystem finish 
successfully 00:11:19.617 nvmf threads destroy successfully 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # iptr 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-save 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # iptables-restore 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # remove_spdk_ns 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:19.617 05:11:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.162 00:11:22.162 real 0m19.078s 00:11:22.162 user 0m43.335s 00:11:22.162 sys 0m5.917s 00:11:22.162 05:11:35 
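The provisioning that nvmf_example.sh drives over RPC above reduces to a short sequence of calls. The sketch below is a dry-run reconstruction from the logged `rpc_cmd` invocations; the `rpc.py` entry point is an assumption (the harness wraps it as `rpc_cmd`), and by default it only prints the calls.

```shell
# Sketch of the SPDK RPC provisioning sequence seen in the log above.
# Assumption: rpc.py is the underlying RPC client; DRY_RUN=1 (default) prints calls only.
DRY_RUN=${DRY_RUN:-1}
rpc() { if [ "$DRY_RUN" = 1 ]; then echo rpc.py "$@"; else scripts/rpc.py "$@"; fi; }

rpc nvmf_create_transport -t tcp -o -u 8192   # TCP transport, 8192 B in-capsule data
rpc bdev_malloc_create 64 512                 # 64 MiB RAM-backed bdev, 512 B blocks -> Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

With the listener up, `spdk_nvme_perf` connects to `10.0.0.2:4420` with the trid string shown in the log and produces the latency table above.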
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:22.162 ************************************ 00:11:22.162 END TEST nvmf_example 00:11:22.162 ************************************ 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:22.162 ************************************ 00:11:22.162 START TEST nvmf_filesystem 00:11:22.162 ************************************ 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:22.162 * Looking for test storage... 
00:11:22.162 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.162 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:22.163 
05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.163 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:11:22.163 --rc genhtml_branch_coverage=1 00:11:22.163 --rc genhtml_function_coverage=1 00:11:22.163 --rc genhtml_legend=1 00:11:22.163 --rc geninfo_all_blocks=1 00:11:22.163 --rc geninfo_unexecuted_blocks=1 00:11:22.163 00:11:22.163 ' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.163 --rc genhtml_branch_coverage=1 00:11:22.163 --rc genhtml_function_coverage=1 00:11:22.163 --rc genhtml_legend=1 00:11:22.163 --rc geninfo_all_blocks=1 00:11:22.163 --rc geninfo_unexecuted_blocks=1 00:11:22.163 00:11:22.163 ' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.163 --rc genhtml_branch_coverage=1 00:11:22.163 --rc genhtml_function_coverage=1 00:11:22.163 --rc genhtml_legend=1 00:11:22.163 --rc geninfo_all_blocks=1 00:11:22.163 --rc geninfo_unexecuted_blocks=1 00:11:22.163 00:11:22.163 ' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.163 --rc genhtml_branch_coverage=1 00:11:22.163 --rc genhtml_function_coverage=1 00:11:22.163 --rc genhtml_legend=1 00:11:22.163 --rc geninfo_all_blocks=1 00:11:22.163 --rc geninfo_unexecuted_blocks=1 00:11:22.163 00:11:22.163 ' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:22.163 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:22.163 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:22.163 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:22.163 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR=//var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 
-- # CONFIG_ARCH=native 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:22.164 
05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:22.164 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:22.164 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:22.164 #define SPDK_CONFIG_H 00:11:22.164 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:22.164 #define SPDK_CONFIG_APPS 1 00:11:22.164 #define SPDK_CONFIG_ARCH native 00:11:22.164 #undef SPDK_CONFIG_ASAN 00:11:22.164 #undef SPDK_CONFIG_AVAHI 00:11:22.164 #undef SPDK_CONFIG_CET 00:11:22.164 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:22.164 #define SPDK_CONFIG_COVERAGE 1 00:11:22.164 #define SPDK_CONFIG_CROSS_PREFIX 00:11:22.164 #undef SPDK_CONFIG_CRYPTO 00:11:22.164 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:22.164 #undef SPDK_CONFIG_CUSTOMOCF 00:11:22.164 #undef SPDK_CONFIG_DAOS 00:11:22.164 #define SPDK_CONFIG_DAOS_DIR 00:11:22.164 #define SPDK_CONFIG_DEBUG 1 00:11:22.164 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:22.164 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:22.164 #define SPDK_CONFIG_DPDK_INC_DIR //var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/include 00:11:22.164 #define SPDK_CONFIG_DPDK_LIB_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.164 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:22.164 #undef SPDK_CONFIG_DPDK_UADK 00:11:22.164 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:22.164 #define SPDK_CONFIG_EXAMPLES 1 00:11:22.164 #undef SPDK_CONFIG_FC 00:11:22.164 #define SPDK_CONFIG_FC_PATH 00:11:22.164 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:22.164 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:22.164 #define SPDK_CONFIG_FSDEV 1 00:11:22.164 #undef SPDK_CONFIG_FUSE 00:11:22.164 #undef SPDK_CONFIG_FUZZER 00:11:22.164 #define 
SPDK_CONFIG_FUZZER_LIB 00:11:22.164 #undef SPDK_CONFIG_GOLANG 00:11:22.164 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:22.164 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:22.164 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:22.164 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:22.164 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:22.164 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:22.164 #undef SPDK_CONFIG_HAVE_LZ4 00:11:22.164 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:22.164 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:22.164 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:22.164 #define SPDK_CONFIG_IDXD 1 00:11:22.164 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:22.164 #undef SPDK_CONFIG_IPSEC_MB 00:11:22.164 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:22.164 #define SPDK_CONFIG_ISAL 1 00:11:22.164 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:22.164 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:22.164 #define SPDK_CONFIG_LIBDIR 00:11:22.164 #undef SPDK_CONFIG_LTO 00:11:22.164 #define SPDK_CONFIG_MAX_LCORES 128 00:11:22.164 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:22.164 #define SPDK_CONFIG_NVME_CUSE 1 00:11:22.164 #undef SPDK_CONFIG_OCF 00:11:22.164 #define SPDK_CONFIG_OCF_PATH 00:11:22.164 #define SPDK_CONFIG_OPENSSL_PATH 00:11:22.164 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:22.164 #define SPDK_CONFIG_PGO_DIR 00:11:22.164 #undef SPDK_CONFIG_PGO_USE 00:11:22.164 #define SPDK_CONFIG_PREFIX /usr/local 00:11:22.164 #undef SPDK_CONFIG_RAID5F 00:11:22.164 #undef SPDK_CONFIG_RBD 00:11:22.164 #define SPDK_CONFIG_RDMA 1 00:11:22.164 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:22.164 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:22.164 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:22.164 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:22.165 #define SPDK_CONFIG_SHARED 1 00:11:22.165 #undef SPDK_CONFIG_SMA 00:11:22.165 #define SPDK_CONFIG_TESTS 1 00:11:22.165 #undef SPDK_CONFIG_TSAN 00:11:22.165 #define SPDK_CONFIG_UBLK 1 00:11:22.165 #define SPDK_CONFIG_UBSAN 1 00:11:22.165 #undef 
SPDK_CONFIG_UNIT_TESTS 00:11:22.165 #undef SPDK_CONFIG_URING 00:11:22.165 #define SPDK_CONFIG_URING_PATH 00:11:22.165 #undef SPDK_CONFIG_URING_ZNS 00:11:22.165 #undef SPDK_CONFIG_USDT 00:11:22.165 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:22.165 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:22.165 #define SPDK_CONFIG_VFIO_USER 1 00:11:22.165 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:22.165 #define SPDK_CONFIG_VHOST 1 00:11:22.165 #define SPDK_CONFIG_VIRTIO 1 00:11:22.165 #undef SPDK_CONFIG_VTUNE 00:11:22.165 #define SPDK_CONFIG_VTUNE_DIR 00:11:22.165 #define SPDK_CONFIG_WERROR 1 00:11:22.165 #define SPDK_CONFIG_WPDK_DIR 00:11:22.165 #undef SPDK_CONFIG_XNVME 00:11:22.165 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.165 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 1 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:22.165 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:22.165 
05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:22.165 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:22.166 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:22.166 
05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : /var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@140 -- # : v22.11.4 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:22.166 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 
00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:22.166 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 199778 ]] 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 199778 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.Zfugaq 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.Zfugaq/tests/target /tmp/spdk.Zfugaq 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:11:22.167 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=722997248 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=4561432576 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=88898068480 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=95552405504 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6654337024 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47766171648 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=10031104 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19087470592 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19110481920 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23011328 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=47775887360 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=47776202752 00:11:22.168 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=315392 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=9555226624 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=9555238912 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:22.168 * Looking for test storage... 
00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=88898068480 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=8868929536 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1703 -- # true 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:22.168 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lcov --version 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:11:22.168 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1712 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:22.429 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:22.429 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.429 --rc genhtml_branch_coverage=1 00:11:22.429 --rc genhtml_function_coverage=1 00:11:22.430 --rc genhtml_legend=1 00:11:22.430 --rc geninfo_all_blocks=1 00:11:22.430 --rc geninfo_unexecuted_blocks=1 00:11:22.430 00:11:22.430 ' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.430 --rc genhtml_branch_coverage=1 00:11:22.430 --rc genhtml_function_coverage=1 00:11:22.430 --rc genhtml_legend=1 00:11:22.430 --rc geninfo_all_blocks=1 00:11:22.430 --rc geninfo_unexecuted_blocks=1 00:11:22.430 00:11:22.430 ' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.430 --rc genhtml_branch_coverage=1 00:11:22.430 --rc genhtml_function_coverage=1 00:11:22.430 --rc genhtml_legend=1 00:11:22.430 --rc geninfo_all_blocks=1 00:11:22.430 --rc geninfo_unexecuted_blocks=1 00:11:22.430 00:11:22.430 ' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:22.430 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:22.430 --rc genhtml_branch_coverage=1 00:11:22.430 --rc genhtml_function_coverage=1 00:11:22.430 --rc genhtml_legend=1 00:11:22.430 --rc geninfo_all_blocks=1 00:11:22.430 --rc geninfo_unexecuted_blocks=1 00:11:22.430 00:11:22.430 ' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:22.430 05:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # : 0 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:11:22.430 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # 
MALLOC_BDEV_SIZE=512 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # xtrace_disable 00:11:22.430 05:11:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # pci_devs=() 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # local -a 
pci_devs 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # net_devs=() 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # e810=() 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # local -ga e810 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # x722=() 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # local -ga x722 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # mlx=() 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@322 -- # local -ga mlx 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:29.004 05:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:11:29.004 Found 0000:af:00.0 (0x8086 - 0x159b) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # 
[[ ice == unknown ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:11:29.004 Found 0000:af:00.1 (0x8086 - 0x159b) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.004 05:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:11:29.004 Found net devices under 0000:af:00.0: cvl_0_0 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:11:29.004 Found net devices under 0000:af:00.1: cvl_0_1 00:11:29.004 05:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@442 -- # is_hw=yes 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:11:29.004 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:11:29.005 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:29.005 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.336 ms 00:11:29.005 00:11:29.005 --- 10.0.0.2 ping statistics --- 00:11:29.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.005 rtt min/avg/max/mdev = 0.336/0.336/0.336/0.000 ms 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:29.005 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:29.005 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:11:29.005 00:11:29.005 --- 10.0.0.1 ping statistics --- 00:11:29.005 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:29.005 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # return 0 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:29.005 05:11:41 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 ************************************ 00:11:29.005 START TEST nvmf_filesystem_no_in_capsule 00:11:29.005 ************************************ 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=202965 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 202965 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@835 -- # '[' -z 202965 ']' 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:29.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:29.005 05:11:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 [2024-12-15 05:11:41.922067] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:29.005 [2024-12-15 05:11:41.922105] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:29.005 [2024-12-15 05:11:41.999621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:29.005 [2024-12-15 05:11:42.023299] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:29.005 [2024-12-15 05:11:42.023333] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:29.005 [2024-12-15 05:11:42.023341] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:29.005 [2024-12-15 05:11:42.023347] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:29.005 [2024-12-15 05:11:42.023352] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:29.005 [2024-12-15 05:11:42.024606] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:29.005 [2024-12-15 05:11:42.024717] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:29.005 [2024-12-15 05:11:42.024801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:29.005 [2024-12-15 05:11:42.024803] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 [2024-12-15 05:11:42.165153] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 Malloc1 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.005 [2024-12-15 05:11:42.327165] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:29.005 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:29.006 05:11:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:29.006 { 00:11:29.006 "name": "Malloc1", 00:11:29.006 "aliases": [ 00:11:29.006 "c4f74642-3613-46ed-9949-ed4938eeb37e" 00:11:29.006 ], 00:11:29.006 "product_name": "Malloc disk", 00:11:29.006 "block_size": 512, 00:11:29.006 "num_blocks": 1048576, 00:11:29.006 "uuid": "c4f74642-3613-46ed-9949-ed4938eeb37e", 00:11:29.006 "assigned_rate_limits": { 00:11:29.006 "rw_ios_per_sec": 0, 00:11:29.006 "rw_mbytes_per_sec": 0, 00:11:29.006 "r_mbytes_per_sec": 0, 00:11:29.006 "w_mbytes_per_sec": 0 00:11:29.006 }, 00:11:29.006 "claimed": true, 00:11:29.006 "claim_type": "exclusive_write", 00:11:29.006 "zoned": false, 00:11:29.006 "supported_io_types": { 00:11:29.006 "read": true, 00:11:29.006 "write": true, 00:11:29.006 "unmap": true, 00:11:29.006 "flush": true, 00:11:29.006 "reset": true, 00:11:29.006 "nvme_admin": false, 00:11:29.006 "nvme_io": false, 00:11:29.006 "nvme_io_md": false, 00:11:29.006 "write_zeroes": true, 00:11:29.006 "zcopy": true, 00:11:29.006 "get_zone_info": false, 00:11:29.006 "zone_management": false, 00:11:29.006 "zone_append": false, 00:11:29.006 "compare": false, 00:11:29.006 "compare_and_write": 
false, 00:11:29.006 "abort": true, 00:11:29.006 "seek_hole": false, 00:11:29.006 "seek_data": false, 00:11:29.006 "copy": true, 00:11:29.006 "nvme_iov_md": false 00:11:29.006 }, 00:11:29.006 "memory_domains": [ 00:11:29.006 { 00:11:29.006 "dma_device_id": "system", 00:11:29.006 "dma_device_type": 1 00:11:29.006 }, 00:11:29.006 { 00:11:29.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:29.006 "dma_device_type": 2 00:11:29.006 } 00:11:29.006 ], 00:11:29.006 "driver_specific": {} 00:11:29.006 } 00:11:29.006 ]' 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:29.006 05:11:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:29.943 05:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:11:29.943 05:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:29.943 05:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:29.943 05:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:29.943 05:11:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:32.478 05:11:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:32.478 05:11:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:32.737 05:11:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:34.114 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:34.114 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:34.114 05:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:34.114 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.114 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:34.114 ************************************ 00:11:34.114 START TEST filesystem_ext4 00:11:34.114 ************************************ 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:34.115 05:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:34.115 05:11:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:34.115 mke2fs 1.47.0 (5-Feb-2023) 00:11:34.115 Discarding device blocks: 0/522240 done 00:11:34.115 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:34.115 Filesystem UUID: 8927e3cc-bb19-4b35-9d7c-9af2bdacce51 00:11:34.115 Superblock backups stored on blocks: 00:11:34.115 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:34.115 00:11:34.115 Allocating group tables: 0/64 done 00:11:34.115 Writing inode tables: 0/64 done 00:11:34.115 Creating journal (8192 blocks): done 00:11:36.060 Writing superblocks and filesystem accounting information: 0/64 1/64 done 00:11:36.060 00:11:36.060 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:36.060 05:11:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.627 05:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 202965 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:42.627 00:11:42.627 real 0m8.335s 00:11:42.627 user 0m0.025s 00:11:42.627 sys 0m0.126s 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:42.627 ************************************ 00:11:42.627 END TEST filesystem_ext4 00:11:42.627 ************************************ 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:42.627 
05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.627 ************************************ 00:11:42.627 START TEST filesystem_btrfs 00:11:42.627 ************************************ 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:42.627 05:11:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:42.627 05:11:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:42.627 btrfs-progs v6.8.1 00:11:42.627 See https://btrfs.readthedocs.io for more information. 00:11:42.627 00:11:42.627 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:42.627 NOTE: several default settings have changed in version 5.15, please make sure 00:11:42.627 this does not affect your deployments: 00:11:42.627 - DUP for metadata (-m dup) 00:11:42.627 - enabled no-holes (-O no-holes) 00:11:42.627 - enabled free-space-tree (-R free-space-tree) 00:11:42.627 00:11:42.627 Label: (null) 00:11:42.627 UUID: 5e5594b7-fd7e-4583-a23d-b3502f791d96 00:11:42.627 Node size: 16384 00:11:42.627 Sector size: 4096 (CPU page size: 4096) 00:11:42.627 Filesystem size: 510.00MiB 00:11:42.627 Block group profiles: 00:11:42.627 Data: single 8.00MiB 00:11:42.627 Metadata: DUP 32.00MiB 00:11:42.627 System: DUP 8.00MiB 00:11:42.627 SSD detected: yes 00:11:42.627 Zoned device: no 00:11:42.627 Features: extref, skinny-metadata, no-holes, free-space-tree 00:11:42.627 Checksum: crc32c 00:11:42.627 Number of devices: 1 00:11:42.627 Devices: 00:11:42.627 ID SIZE PATH 00:11:42.627 1 510.00MiB /dev/nvme0n1p1 00:11:42.627 00:11:42.627 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:11:42.627 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.886 05:11:56 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.886 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:42.886 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.886 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:42.886 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:42.886 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 202965 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.146 00:11:43.146 real 0m0.753s 00:11:43.146 user 0m0.028s 00:11:43.146 sys 0m0.151s 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:43.146 
05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:43.146 ************************************ 00:11:43.146 END TEST filesystem_btrfs 00:11:43.146 ************************************ 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.146 ************************************ 00:11:43.146 START TEST filesystem_xfs 00:11:43.146 ************************************ 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:11:43.146 05:11:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:43.146 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:43.146 = sectsz=512 attr=2, projid32bit=1 00:11:43.146 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:43.146 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:43.146 data = bsize=4096 blocks=130560, imaxpct=25 00:11:43.146 = sunit=0 swidth=0 blks 00:11:43.146 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:43.146 log =internal log bsize=4096 blocks=16384, version=2 00:11:43.146 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:43.146 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:44.081 Discarding blocks...Done. 
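The trace above runs the same create/verify cycle once per filesystem (ext4, btrfs, xfs) through `make_filesystem()` and `target/filesystem.sh`: make the filesystem on the partition, mount it, touch and remove a file with syncs in between, then unmount. A minimal dry-run condensation of that cycle is sketched below — the device path and mount point are placeholders taken from this log, and the commands are echoed rather than executed, since the real steps need root and a live NVMe-oF block device:

```shell
#!/bin/sh
# Dry-run sketch of the per-filesystem check cycle seen in the trace.
# Echoes the command sequence instead of running it.
fs_check() {
    fstype=$1; dev=$2; mnt=$3
    # ext4 is forced with -F; btrfs/xfs use -f (mirrors make_filesystem()).
    if [ "$fstype" = ext4 ]; then force=-F; else force=-f; fi
    echo "mkfs.$fstype $force $dev"
    echo "mount $dev $mnt"
    echo "touch $mnt/aaa; sync; rm $mnt/aaa; sync"
    echo "umount $mnt"
}

fs_check xfs /dev/nvme0n1p1 /mnt/device
```

The only per-filesystem difference in the trace is the force flag passed to mkfs; everything after mkfs is identical across the three runs.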
00:11:44.081 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:44.081 05:11:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:47.370 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:47.370 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:47.370 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:47.370 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 202965 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:47.371 05:12:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:47.371 00:11:47.371 real 0m3.804s 00:11:47.371 user 0m0.027s 00:11:47.371 sys 0m0.115s 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:47.371 ************************************ 00:11:47.371 END TEST filesystem_xfs 00:11:47.371 ************************************ 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.371 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 202965 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 202965 ']' 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 202965 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 202965 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 202965' 00:11:47.371 killing process with pid 202965 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 202965 00:11:47.371 05:12:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 202965 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:47.631 00:11:47.631 real 0m19.205s 00:11:47.631 user 1m15.687s 00:11:47.631 sys 0m1.613s 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.631 ************************************ 00:11:47.631 END TEST nvmf_filesystem_no_in_capsule 00:11:47.631 ************************************ 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.631 05:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.631 ************************************ 00:11:47.631 START TEST nvmf_filesystem_in_capsule 00:11:47.631 ************************************ 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@509 -- # nvmfpid=206411 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@510 -- # waitforlisten 206411 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 206411 ']' 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.631 05:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:47.631 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.631 [2024-12-15 05:12:01.187982] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:11:47.631 [2024-12-15 05:12:01.188045] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.631 [2024-12-15 05:12:01.265143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.631 [2024-12-15 05:12:01.288109] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.631 [2024-12-15 05:12:01.288148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.631 [2024-12-15 05:12:01.288155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.631 [2024-12-15 05:12:01.288162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.631 [2024-12-15 05:12:01.288167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:47.631 [2024-12-15 05:12:01.289603] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.631 [2024-12-15 05:12:01.289643] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.631 [2024-12-15 05:12:01.289752] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:47.631 [2024-12-15 05:12:01.289753] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.891 [2024-12-15 05:12:01.422826] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.891 Malloc1 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.891 05:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:47.891 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.150 [2024-12-15 05:12:01.578161] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:48.150 05:12:01 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:11:48.150 { 00:11:48.150 "name": "Malloc1", 00:11:48.150 "aliases": [ 00:11:48.150 "d220d1cb-6c3f-4e9b-911b-3bbb8d43e75a" 00:11:48.150 ], 00:11:48.150 "product_name": "Malloc disk", 00:11:48.150 "block_size": 512, 00:11:48.150 "num_blocks": 1048576, 00:11:48.150 "uuid": "d220d1cb-6c3f-4e9b-911b-3bbb8d43e75a", 00:11:48.150 "assigned_rate_limits": { 00:11:48.150 "rw_ios_per_sec": 0, 00:11:48.150 "rw_mbytes_per_sec": 0, 00:11:48.150 "r_mbytes_per_sec": 0, 00:11:48.150 "w_mbytes_per_sec": 0 00:11:48.150 }, 00:11:48.150 "claimed": true, 00:11:48.150 "claim_type": "exclusive_write", 00:11:48.150 "zoned": false, 00:11:48.150 "supported_io_types": { 00:11:48.150 "read": true, 00:11:48.150 "write": true, 00:11:48.150 "unmap": true, 00:11:48.150 "flush": true, 00:11:48.150 "reset": true, 00:11:48.150 "nvme_admin": false, 00:11:48.150 "nvme_io": false, 00:11:48.150 "nvme_io_md": false, 00:11:48.150 "write_zeroes": true, 00:11:48.150 "zcopy": true, 00:11:48.150 "get_zone_info": false, 00:11:48.150 "zone_management": false, 00:11:48.150 "zone_append": false, 00:11:48.150 "compare": false, 00:11:48.150 "compare_and_write": false, 00:11:48.150 "abort": true, 00:11:48.150 "seek_hole": false, 00:11:48.150 "seek_data": false, 00:11:48.150 "copy": true, 00:11:48.150 "nvme_iov_md": false 00:11:48.150 }, 00:11:48.150 "memory_domains": [ 00:11:48.150 { 00:11:48.150 "dma_device_id": "system", 00:11:48.150 "dma_device_type": 1 00:11:48.150 }, 00:11:48.150 { 00:11:48.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.150 "dma_device_type": 2 00:11:48.150 } 00:11:48.150 ], 00:11:48.150 
"driver_specific": {} 00:11:48.150 } 00:11:48.150 ]' 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:48.150 05:12:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.529 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.529 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:11:49.529 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.529 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:11:49.529 05:12:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:51.434 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:51.435 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:51.435 05:12:04 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:51.435 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:51.435 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:51.435 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:51.435 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:51.435 05:12:04 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:52.001 05:12:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:52.939 ************************************ 00:11:52.939 START TEST filesystem_in_capsule_ext4 00:11:52.939 ************************************ 00:11:52.939 05:12:06 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:11:52.939 05:12:06 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:52.939 mke2fs 1.47.0 (5-Feb-2023) 00:11:53.199 Discarding device blocks: 
0/522240 done 00:11:53.199 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:53.199 Filesystem UUID: 6787e571-c0ba-4447-9658-9dc1f017b92a 00:11:53.199 Superblock backups stored on blocks: 00:11:53.199 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:53.199 00:11:53.199 Allocating group tables: 0/64 done 00:11:53.199 Writing inode tables: 0/64 done 00:11:53.199 Creating journal (8192 blocks): done 00:11:54.653 Writing superblocks and filesystem accounting information: 0/6410/64 done 00:11:54.653 00:11:54.653 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:54.653 05:12:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 206411 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.222 00:12:01.222 real 0m7.257s 00:12:01.222 user 0m0.023s 00:12:01.222 sys 0m0.074s 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:12:01.222 ************************************ 00:12:01.222 END TEST filesystem_in_capsule_ext4 00:12:01.222 ************************************ 00:12:01.222 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.223 ************************************ 00:12:01.223 START 
TEST filesystem_in_capsule_btrfs 00:12:01.223 ************************************ 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:12:01.223 05:12:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:12:01.223 btrfs-progs v6.8.1 00:12:01.223 See https://btrfs.readthedocs.io for more information. 00:12:01.223 00:12:01.223 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:12:01.223 NOTE: several default settings have changed in version 5.15, please make sure 00:12:01.223 this does not affect your deployments: 00:12:01.223 - DUP for metadata (-m dup) 00:12:01.223 - enabled no-holes (-O no-holes) 00:12:01.223 - enabled free-space-tree (-R free-space-tree) 00:12:01.223 00:12:01.223 Label: (null) 00:12:01.223 UUID: cffb1aeb-f972-4827-ac30-408f020c29c1 00:12:01.223 Node size: 16384 00:12:01.223 Sector size: 4096 (CPU page size: 4096) 00:12:01.223 Filesystem size: 510.00MiB 00:12:01.223 Block group profiles: 00:12:01.223 Data: single 8.00MiB 00:12:01.223 Metadata: DUP 32.00MiB 00:12:01.223 System: DUP 8.00MiB 00:12:01.223 SSD detected: yes 00:12:01.223 Zoned device: no 00:12:01.223 Features: extref, skinny-metadata, no-holes, free-space-tree 00:12:01.223 Checksum: crc32c 00:12:01.223 Number of devices: 1 00:12:01.223 Devices: 00:12:01.223 ID SIZE PATH 00:12:01.223 1 510.00MiB /dev/nvme0n1p1 00:12:01.223 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 206411 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:01.223 00:12:01.223 real 0m0.702s 00:12:01.223 user 0m0.033s 00:12:01.223 sys 0m0.107s 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:12:01.223 ************************************ 00:12:01.223 END TEST filesystem_in_capsule_btrfs 00:12:01.223 ************************************ 00:12:01.223 05:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:01.223 ************************************ 00:12:01.223 START TEST filesystem_in_capsule_xfs 00:12:01.223 ************************************ 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:12:01.223 
05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:12:01.223 05:12:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:12:01.223 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:12:01.223 = sectsz=512 attr=2, projid32bit=1 00:12:01.223 = crc=1 finobt=1, sparse=1, rmapbt=0 00:12:01.223 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:12:01.223 data = bsize=4096 blocks=130560, imaxpct=25 00:12:01.223 = sunit=0 swidth=0 blks 00:12:01.223 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:12:01.223 log =internal log bsize=4096 blocks=16384, version=2 00:12:01.223 = sectsz=512 sunit=0 blks, lazy-count=1 00:12:01.223 realtime =none extsz=4096 blocks=0, rtextents=0 00:12:02.163 Discarding blocks...Done. 
00:12:02.163 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:12:02.163 05:12:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:12:04.697 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:12:04.697 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:12:04.697 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:12:04.697 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:12:04.697 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:12:04.697 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 206411 00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:12:04.698 00:12:04.698 real 0m3.248s 00:12:04.698 user 0m0.031s 00:12:04.698 sys 0m0.067s 00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:12:04.698 ************************************ 00:12:04.698 END TEST filesystem_in_capsule_xfs 00:12:04.698 ************************************ 00:12:04.698 05:12:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.698 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.698 05:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 206411 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 206411 ']' 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 206411 00:12:04.698 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:12:04.957 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:04.957 05:12:18 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 206411 00:12:04.957 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:04.957 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:04.957 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 206411' 00:12:04.957 killing process with pid 206411 00:12:04.957 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 206411 00:12:04.957 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 206411 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:05.218 00:12:05.218 real 0m17.613s 00:12:05.218 user 1m9.395s 00:12:05.218 sys 0m1.408s 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:05.218 ************************************ 00:12:05.218 END TEST nvmf_filesystem_in_capsule 00:12:05.218 ************************************ 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # sync 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set +e 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:05.218 rmmod nvme_tcp 00:12:05.218 rmmod nvme_fabrics 00:12:05.218 rmmod nvme_keyring 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@128 -- # set -e 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # return 0 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # iptr 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-save 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@791 -- # iptables-restore 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:05.218 05:12:18 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.758 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:07.758 00:12:07.758 real 0m45.494s 00:12:07.758 user 2m27.162s 00:12:07.758 sys 0m7.647s 00:12:07.758 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:07.758 05:12:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:07.758 ************************************ 00:12:07.758 END TEST nvmf_filesystem 00:12:07.758 ************************************ 00:12:07.758 05:12:20 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:07.758 05:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:07.758 05:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:07.758 05:12:20 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:07.758 ************************************ 00:12:07.758 START TEST nvmf_target_discovery 00:12:07.758 ************************************ 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:07.759 * Looking for test storage... 
00:12:07.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@344 -- # case "$op" in 00:12:07.759 
05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.759 --rc genhtml_branch_coverage=1 00:12:07.759 --rc genhtml_function_coverage=1 00:12:07.759 --rc genhtml_legend=1 00:12:07.759 --rc geninfo_all_blocks=1 00:12:07.759 --rc geninfo_unexecuted_blocks=1 00:12:07.759 00:12:07.759 ' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.759 --rc genhtml_branch_coverage=1 00:12:07.759 --rc genhtml_function_coverage=1 00:12:07.759 --rc genhtml_legend=1 00:12:07.759 --rc geninfo_all_blocks=1 00:12:07.759 --rc geninfo_unexecuted_blocks=1 00:12:07.759 00:12:07.759 ' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.759 --rc genhtml_branch_coverage=1 00:12:07.759 --rc genhtml_function_coverage=1 00:12:07.759 --rc genhtml_legend=1 00:12:07.759 --rc geninfo_all_blocks=1 00:12:07.759 --rc geninfo_unexecuted_blocks=1 00:12:07.759 00:12:07.759 ' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:07.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:07.759 --rc genhtml_branch_coverage=1 00:12:07.759 --rc genhtml_function_coverage=1 00:12:07.759 --rc genhtml_legend=1 00:12:07.759 --rc geninfo_all_blocks=1 00:12:07.759 --rc geninfo_unexecuted_blocks=1 00:12:07.759 00:12:07.759 ' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:07.759 05:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.759 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # : 0 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:07.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # 
NULL_BDEV_SIZE=102400 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:12:07.760 05:12:21 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.339 05:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # e810=() 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # x722=() 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # mlx=() 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.339 05:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:14.339 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:14.339 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:14.339 05:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:14.339 Found net devices under 0000:af:00.0: cvl_0_0 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:14.339 05:12:26 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:14.339 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:14.340 Found net devices under 0000:af:00.1: cvl_0_1 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@256 -- # (( 2 
> 1 )) 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.340 05:12:26 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@284 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set lo up 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:14.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.264 ms 00:12:14.340 00:12:14.340 --- 10.0.0.2 ping statistics --- 00:12:14.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.340 rtt min/avg/max/mdev = 0.264/0.264/0.264/0.000 ms 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.166 ms 00:12:14.340 00:12:14.340 --- 10.0.0.1 ping statistics --- 00:12:14.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.340 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # return 0 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@509 -- # nvmfpid=213443 00:12:14.340 05:12:27 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@510 -- # waitforlisten 213443 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 213443 ']' 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 [2024-12-15 05:12:27.206128] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:14.340 [2024-12-15 05:12:27.206167] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.340 [2024-12-15 05:12:27.264402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.340 [2024-12-15 05:12:27.287780] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
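The trace up to this point builds a point-to-point test topology: the target-side interface is moved into a fresh network namespace, both ends get 10.0.0.x/24 addresses, an iptables rule opens TCP port 4420 on the initiator side, connectivity is verified with ping in both directions, and nvmf_tgt is launched inside the namespace. A condensed dry-run sketch of those steps follows — interface names cvl_0_0/cvl_0_1, the addresses, and the nvmf_tgt arguments are taken from this log, while the `run` helper is an illustrative stand-in that only echoes each command (swap its body for `"$@"` and run as root to execute for real):

```shell
#!/bin/sh
# Dry-run sketch of the namespace setup traced above.
# run() only echoes; replace its body with "$@" (as root) to execute.
run() { echo "+ $*"; }

NS=cvl_0_0_ns_spdk                        # target-side namespace
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"       # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1   # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
# Allow NVMe/TCP traffic in (port 4420), tagged so teardown can find it:
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                    # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1
# Launch the target inside the namespace (binary path relative to the
# SPDK checkout; core mask and trace flags as in this log):
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
```

The namespace gives the target its own network stack, so initiator and target can share one host yet still exercise a real TCP path between two interfaces.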
00:12:14.340 [2024-12-15 05:12:27.287817] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:14.340 [2024-12-15 05:12:27.287824] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.340 [2024-12-15 05:12:27.287830] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.340 [2024-12-15 05:12:27.287836] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.340 [2024-12-15 05:12:27.289289] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.340 [2024-12-15 05:12:27.289331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.340 [2024-12-15 05:12:27.289437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:14.340 [2024-12-15 05:12:27.289438] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 [2024-12-15 05:12:27.433764] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 Null1 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.340 
05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.340 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 [2024-12-15 05:12:27.510191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 Null2 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 
05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 Null3 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s 
SPDK00000000000003 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 Null4 00:12:14.341 
05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:14.341 00:12:14.341 Discovery Log Number of Records 6, Generation counter 6 00:12:14.341 =====Discovery Log Entry 0====== 00:12:14.341 trtype: tcp 00:12:14.341 adrfam: ipv4 00:12:14.341 subtype: current discovery subsystem 00:12:14.341 treq: not required 00:12:14.341 portid: 0 00:12:14.341 trsvcid: 4420 00:12:14.341 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:14.341 traddr: 10.0.0.2 00:12:14.341 eflags: explicit discovery connections, duplicate discovery information 00:12:14.341 sectype: none 00:12:14.341 =====Discovery Log Entry 1====== 00:12:14.341 trtype: tcp 00:12:14.341 adrfam: ipv4 00:12:14.341 subtype: nvme subsystem 00:12:14.341 treq: not required 00:12:14.341 portid: 0 00:12:14.341 trsvcid: 4420 00:12:14.341 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:14.341 traddr: 10.0.0.2 00:12:14.341 eflags: none 00:12:14.341 sectype: none 00:12:14.341 =====Discovery Log Entry 2====== 00:12:14.341 
trtype: tcp 00:12:14.341 adrfam: ipv4 00:12:14.341 subtype: nvme subsystem 00:12:14.341 treq: not required 00:12:14.341 portid: 0 00:12:14.341 trsvcid: 4420 00:12:14.341 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:14.341 traddr: 10.0.0.2 00:12:14.341 eflags: none 00:12:14.341 sectype: none 00:12:14.341 =====Discovery Log Entry 3====== 00:12:14.341 trtype: tcp 00:12:14.341 adrfam: ipv4 00:12:14.341 subtype: nvme subsystem 00:12:14.341 treq: not required 00:12:14.341 portid: 0 00:12:14.341 trsvcid: 4420 00:12:14.341 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:14.341 traddr: 10.0.0.2 00:12:14.341 eflags: none 00:12:14.341 sectype: none 00:12:14.341 =====Discovery Log Entry 4====== 00:12:14.341 trtype: tcp 00:12:14.341 adrfam: ipv4 00:12:14.341 subtype: nvme subsystem 00:12:14.341 treq: not required 00:12:14.341 portid: 0 00:12:14.341 trsvcid: 4420 00:12:14.341 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:14.341 traddr: 10.0.0.2 00:12:14.341 eflags: none 00:12:14.341 sectype: none 00:12:14.341 =====Discovery Log Entry 5====== 00:12:14.341 trtype: tcp 00:12:14.341 adrfam: ipv4 00:12:14.341 subtype: discovery subsystem referral 00:12:14.341 treq: not required 00:12:14.341 portid: 0 00:12:14.341 trsvcid: 4430 00:12:14.341 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:14.341 traddr: 10.0.0.2 00:12:14.341 eflags: none 00:12:14.341 sectype: none 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:14.341 Perform nvmf subsystem discovery via RPC 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.341 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.341 [ 00:12:14.341 { 00:12:14.341 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:12:14.341 "subtype": "Discovery", 00:12:14.341 "listen_addresses": [ 00:12:14.341 { 00:12:14.341 "trtype": "TCP", 00:12:14.341 "adrfam": "IPv4", 00:12:14.341 "traddr": "10.0.0.2", 00:12:14.341 "trsvcid": "4420" 00:12:14.341 } 00:12:14.341 ], 00:12:14.341 "allow_any_host": true, 00:12:14.341 "hosts": [] 00:12:14.341 }, 00:12:14.341 { 00:12:14.341 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:14.341 "subtype": "NVMe", 00:12:14.341 "listen_addresses": [ 00:12:14.341 { 00:12:14.341 "trtype": "TCP", 00:12:14.341 "adrfam": "IPv4", 00:12:14.341 "traddr": "10.0.0.2", 00:12:14.341 "trsvcid": "4420" 00:12:14.341 } 00:12:14.341 ], 00:12:14.342 "allow_any_host": true, 00:12:14.342 "hosts": [], 00:12:14.342 "serial_number": "SPDK00000000000001", 00:12:14.342 "model_number": "SPDK bdev Controller", 00:12:14.342 "max_namespaces": 32, 00:12:14.342 "min_cntlid": 1, 00:12:14.342 "max_cntlid": 65519, 00:12:14.342 "namespaces": [ 00:12:14.342 { 00:12:14.342 "nsid": 1, 00:12:14.342 "bdev_name": "Null1", 00:12:14.342 "name": "Null1", 00:12:14.342 "nguid": "5AE03237358041CEA1E24E8CF59A26A1", 00:12:14.342 "uuid": "5ae03237-3580-41ce-a1e2-4e8cf59a26a1" 00:12:14.342 } 00:12:14.342 ] 00:12:14.342 }, 00:12:14.342 { 00:12:14.342 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:14.342 "subtype": "NVMe", 00:12:14.342 "listen_addresses": [ 00:12:14.342 { 00:12:14.342 "trtype": "TCP", 00:12:14.342 "adrfam": "IPv4", 00:12:14.342 "traddr": "10.0.0.2", 00:12:14.342 "trsvcid": "4420" 00:12:14.342 } 00:12:14.342 ], 00:12:14.342 "allow_any_host": true, 00:12:14.342 "hosts": [], 00:12:14.342 "serial_number": "SPDK00000000000002", 00:12:14.342 "model_number": "SPDK bdev Controller", 00:12:14.342 "max_namespaces": 32, 00:12:14.342 "min_cntlid": 1, 00:12:14.342 "max_cntlid": 65519, 00:12:14.342 "namespaces": [ 00:12:14.342 { 00:12:14.342 "nsid": 1, 00:12:14.342 "bdev_name": "Null2", 00:12:14.342 "name": "Null2", 00:12:14.342 "nguid": "B7B85187BD3A4EE4A3DDD1AB2BA6D0D7", 
00:12:14.342 "uuid": "b7b85187-bd3a-4ee4-a3dd-d1ab2ba6d0d7" 00:12:14.342 } 00:12:14.342 ] 00:12:14.342 }, 00:12:14.342 { 00:12:14.342 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:14.342 "subtype": "NVMe", 00:12:14.342 "listen_addresses": [ 00:12:14.342 { 00:12:14.342 "trtype": "TCP", 00:12:14.342 "adrfam": "IPv4", 00:12:14.342 "traddr": "10.0.0.2", 00:12:14.342 "trsvcid": "4420" 00:12:14.342 } 00:12:14.342 ], 00:12:14.342 "allow_any_host": true, 00:12:14.342 "hosts": [], 00:12:14.342 "serial_number": "SPDK00000000000003", 00:12:14.342 "model_number": "SPDK bdev Controller", 00:12:14.342 "max_namespaces": 32, 00:12:14.342 "min_cntlid": 1, 00:12:14.342 "max_cntlid": 65519, 00:12:14.342 "namespaces": [ 00:12:14.342 { 00:12:14.342 "nsid": 1, 00:12:14.342 "bdev_name": "Null3", 00:12:14.342 "name": "Null3", 00:12:14.342 "nguid": "57A52DAF00C04907B4A3253EF80C3400", 00:12:14.342 "uuid": "57a52daf-00c0-4907-b4a3-253ef80c3400" 00:12:14.342 } 00:12:14.342 ] 00:12:14.342 }, 00:12:14.342 { 00:12:14.342 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:14.342 "subtype": "NVMe", 00:12:14.342 "listen_addresses": [ 00:12:14.342 { 00:12:14.342 "trtype": "TCP", 00:12:14.342 "adrfam": "IPv4", 00:12:14.342 "traddr": "10.0.0.2", 00:12:14.342 "trsvcid": "4420" 00:12:14.342 } 00:12:14.342 ], 00:12:14.342 "allow_any_host": true, 00:12:14.342 "hosts": [], 00:12:14.342 "serial_number": "SPDK00000000000004", 00:12:14.342 "model_number": "SPDK bdev Controller", 00:12:14.342 "max_namespaces": 32, 00:12:14.342 "min_cntlid": 1, 00:12:14.342 "max_cntlid": 65519, 00:12:14.342 "namespaces": [ 00:12:14.342 { 00:12:14.342 "nsid": 1, 00:12:14.342 "bdev_name": "Null4", 00:12:14.342 "name": "Null4", 00:12:14.342 "nguid": "3BBEB9986B4B4D2F9EB72EA7837330AB", 00:12:14.342 "uuid": "3bbeb998-6b4b-4d2f-9eb7-2ea7837330ab" 00:12:14.342 } 00:12:14.342 ] 00:12:14.342 } 00:12:14.342 ] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 
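The RPC traffic in the trace is the same four calls repeated once per subsystem, followed by a discovery listener and a referral — which is why `nvme discover` then reports six records (one discovery subsystem, four NVMe subsystems, one referral on port 4430). A dry-run sketch of that loop, under the assumption that `rpc` is an illustrative echo wrapper (point it at SPDK's scripts/rpc.py to run for real); the NQNs, serial numbers, and the 10.0.0.2:4420 listener match this log:

```shell
#!/bin/sh
# Dry-run sketch of the per-subsystem RPC loop traced above.
# rpc() only echoes; replace with a call to SPDK's scripts/rpc.py to execute.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192     # TCP transport, 8 KiB IO unit

for i in 1 2 3 4; do
    rpc bdev_null_create "Null$i" 102400 512    # 102400 blocks of 512 B
    rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" \
        -a -s "$(printf 'SPDK%014d' "$i")"      # -a: allow any host
    rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a 10.0.0.2 -s 4420
done

# Make the discovery subsystem reachable, then add a referral so the
# discovery log also advertises a second discovery service on 4430:
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
```

Each iteration wires a null bdev (a discard-writes/zero-reads test device) into its own subsystem as namespace 1, which is exactly the shape `nvmf_get_subsystems` returns in the JSON above.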
05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd 
bdev_null_delete Null2 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@50 -- # '[' -n '' ']' 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # sync 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set +e 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:14.342 rmmod nvme_tcp 00:12:14.342 rmmod nvme_fabrics 00:12:14.342 rmmod nvme_keyring 00:12:14.342 05:12:27 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:14.342 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@128 -- # set -e 00:12:14.342 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # return 0 00:12:14.342 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@517 -- # '[' -n 213443 ']' 00:12:14.342 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@518 -- # killprocess 213443 00:12:14.342 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 213443 ']' 00:12:14.343 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 213443 00:12:14.343 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:12:14.343 
05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:14.343 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 213443 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 213443' 00:12:14.603 killing process with pid 213443 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 213443 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 213443 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # iptr 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-save 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@302 -- # remove_spdk_ns 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:14.603 05:12:28 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:17.142 00:12:17.142 real 0m9.300s 00:12:17.142 user 0m5.584s 00:12:17.142 sys 0m4.811s 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:17.142 ************************************ 00:12:17.142 END TEST nvmf_target_discovery 00:12:17.142 ************************************ 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:17.142 ************************************ 00:12:17.142 START TEST nvmf_referrals 00:12:17.142 ************************************ 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:17.142 * Looking for test storage... 
00:12:17.142 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lcov --version 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:12:17.142 05:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:17.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.142 
--rc genhtml_branch_coverage=1 00:12:17.142 --rc genhtml_function_coverage=1 00:12:17.142 --rc genhtml_legend=1 00:12:17.142 --rc geninfo_all_blocks=1 00:12:17.142 --rc geninfo_unexecuted_blocks=1 00:12:17.142 00:12:17.142 ' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:17.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.142 --rc genhtml_branch_coverage=1 00:12:17.142 --rc genhtml_function_coverage=1 00:12:17.142 --rc genhtml_legend=1 00:12:17.142 --rc geninfo_all_blocks=1 00:12:17.142 --rc geninfo_unexecuted_blocks=1 00:12:17.142 00:12:17.142 ' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:17.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.142 --rc genhtml_branch_coverage=1 00:12:17.142 --rc genhtml_function_coverage=1 00:12:17.142 --rc genhtml_legend=1 00:12:17.142 --rc geninfo_all_blocks=1 00:12:17.142 --rc geninfo_unexecuted_blocks=1 00:12:17.142 00:12:17.142 ' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:17.142 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:17.142 --rc genhtml_branch_coverage=1 00:12:17.142 --rc genhtml_function_coverage=1 00:12:17.142 --rc genhtml_legend=1 00:12:17.142 --rc geninfo_all_blocks=1 00:12:17.142 --rc geninfo_unexecuted_blocks=1 00:12:17.142 00:12:17.142 ' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:17.142 
05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 
00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:17.142 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # : 0 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:17.143 05:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:17.143 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:17.143 05:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # xtrace_disable 00:12:17.143 05:12:30 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # pci_devs=() 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # net_devs=() 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@320 -- # e810=() 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # local -ga e810 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # x722=() 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # local -ga x722 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # mlx=() 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@322 -- # local -ga mlx 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:23.720 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:23.720 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:23.720 Found net devices under 0000:af:00.0: cvl_0_0 00:12:23.720 05:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:23.720 Found net devices under 0000:af:00.1: cvl_0_1 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@442 -- # is_hw=yes 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:23.720 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:23.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:23.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:12:23.721 00:12:23.721 --- 10.0.0.2 ping statistics --- 00:12:23.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.721 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:23.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:23.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:12:23.721 00:12:23.721 --- 10.0.0.1 ping statistics --- 00:12:23.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:23.721 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # return 0 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@509 -- # nvmfpid=217063 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@510 -- # waitforlisten 217063 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 217063 ']' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:23.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 [2024-12-15 05:12:36.539317] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:23.721 [2024-12-15 05:12:36.539365] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:23.721 [2024-12-15 05:12:36.617744] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:23.721 [2024-12-15 05:12:36.641040] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:23.721 [2024-12-15 05:12:36.641078] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
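The nvmfappstart sequence above launches nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then blocks in waitforlisten (with `max_retries=100`, per the log) until the app is up on /var/tmp/spdk.sock. A minimal standalone sketch of that polling pattern follows; `wait_for_path` is a hypothetical stand-in, not the real autotest_common.sh helper, which goes further and issues an actual RPC over the socket rather than a bare existence check:

```shell
#!/usr/bin/env bash
# Simplified sketch of a waitforlisten-style loop: poll until a path
# (e.g. an app's UNIX-domain RPC socket) appears, or give up after
# max_retries attempts. Hypothetical helper, NOT the real waitforlisten.
wait_for_path() {
    local path=$1
    local max_retries=${2:-100}   # the log shows max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        if [[ -e $path ]]; then
            return 0              # target is up; caller can start issuing RPCs
        fi
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}
```

The real harness also records the target's pid (`nvmfpid=217063` here) so the EXIT trap can kill it during nvmftestfini.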
00:12:23.721 [2024-12-15 05:12:36.641085] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:23.721 [2024-12-15 05:12:36.641091] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:23.721 [2024-12-15 05:12:36.641096] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:23.721 [2024-12-15 05:12:36.642540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:23.721 [2024-12-15 05:12:36.642649] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:23.721 [2024-12-15 05:12:36.642758] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:23.721 [2024-12-15 05:12:36.642759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 [2024-12-15 05:12:36.779449] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 [2024-12-15 05:12:36.814143] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:23.721 05:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:23.721 05:12:36 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:23.721 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.722 05:12:37 
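The `get_referral_ips nvme` path above extracts referral addresses from `nvme discover -o json` with the jq filter `.records[] | select(.subtype != "current discovery subsystem").traddr`, sorts them, and string-compares the result against the expected list. That filter can be exercised on its own against a hand-written discovery page; the JSON below is a hypothetical three-record sample, not output captured from this run:

```shell
#!/usr/bin/env bash
# Hypothetical discovery-log page: one "current discovery subsystem"
# record (the page itself) plus two referral records. The jq filter
# from referrals.sh keeps only the referrals' traddr fields.
discovery_json='{
  "records": [
    { "subtype": "current discovery subsystem",  "traddr": "10.0.0.2"  },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.3" },
    { "subtype": "discovery subsystem referral", "traddr": "127.0.0.2" }
  ]
}'
ips=$(echo "$discovery_json" \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' \
    | sort)
# Unquoted expansion collapses the newline-separated list to one line,
# matching the harness's space-separated [[ ... == ... ]] comparison.
echo $ips
```

The harness then asserts equality the same way the log does, e.g. `[[ $ips == "127.0.0.2 127.0.0.3" ]]`.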
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:23.722 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.981 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:23.982 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.241 05:12:37 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:24.501 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:24.761 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:25.020 05:12:38 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:25.020 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@516 -- # nvmfcleanup 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # sync 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # 
set +e 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # for i in {1..20} 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:12:25.279 rmmod nvme_tcp 00:12:25.279 rmmod nvme_fabrics 00:12:25.279 rmmod nvme_keyring 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@128 -- # set -e 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # return 0 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@517 -- # '[' -n 217063 ']' 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@518 -- # killprocess 217063 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 217063 ']' 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 217063 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:25.279 05:12:38 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 217063 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 217063' 00:12:25.538 killing process with pid 217063 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- 
# kill 217063 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 217063 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # iptr 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-save 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@791 -- # iptables-restore 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # remove_spdk_ns 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:25.538 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:25.539 05:12:39 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:12:28.077 00:12:28.077 real 0m10.863s 00:12:28.077 user 0m12.575s 00:12:28.077 sys 0m5.215s 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:28.077 ************************************ 
00:12:28.077 END TEST nvmf_referrals 00:12:28.077 ************************************ 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:28.077 ************************************ 00:12:28.077 START TEST nvmf_connect_disconnect 00:12:28.077 ************************************ 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:28.077 * Looking for test storage... 
00:12:28.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- 
# case "$op" in 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:12:28.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.077 --rc genhtml_branch_coverage=1 00:12:28.077 --rc genhtml_function_coverage=1 00:12:28.077 --rc genhtml_legend=1 00:12:28.077 --rc geninfo_all_blocks=1 00:12:28.077 --rc geninfo_unexecuted_blocks=1 00:12:28.077 00:12:28.077 ' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:12:28.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.077 --rc genhtml_branch_coverage=1 00:12:28.077 --rc genhtml_function_coverage=1 00:12:28.077 --rc genhtml_legend=1 00:12:28.077 --rc geninfo_all_blocks=1 00:12:28.077 --rc geninfo_unexecuted_blocks=1 00:12:28.077 00:12:28.077 ' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:12:28.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.077 --rc genhtml_branch_coverage=1 00:12:28.077 --rc genhtml_function_coverage=1 00:12:28.077 --rc genhtml_legend=1 00:12:28.077 --rc geninfo_all_blocks=1 00:12:28.077 --rc geninfo_unexecuted_blocks=1 00:12:28.077 00:12:28.077 ' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:12:28.077 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:28.077 --rc genhtml_branch_coverage=1 00:12:28.077 --rc genhtml_function_coverage=1 00:12:28.077 --rc genhtml_legend=1 00:12:28.077 --rc geninfo_all_blocks=1 00:12:28.077 --rc geninfo_unexecuted_blocks=1 00:12:28.077 00:12:28.077 ' 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:28.077 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # 
NVME_CONNECT='nvme connect' 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # : 0 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:12:28.078 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:12:28.078 05:12:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:34.657 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # e810=() 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # x722=() 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:34.657 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:12:34.658 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:12:34.658 Found 0000:af:00.0 (0x8086 - 0x159b) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:12:34.658 Found 0000:af:00.1 (0x8086 - 0x159b) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:12:34.658 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:12:34.658 Found net devices under 0000:af:00.0: cvl_0_0 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:12:34.658 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:12:34.658 Found net devices under 0000:af:00.1: cvl_0_1 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:34.658 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:12:34.658 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:34.658 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.344 ms 00:12:34.658 00:12:34.658 --- 10.0.0.2 ping statistics --- 00:12:34.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.658 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:34.658 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:34.658 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:12:34.658 00:12:34.658 --- 10.0.0.1 ping statistics --- 00:12:34.658 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:34.658 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # return 0 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@509 -- # 
nvmfpid=221033 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@510 -- # waitforlisten 221033 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 221033 ']' 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:34.658 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:34.659 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 [2024-12-15 05:12:47.500348] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:12:34.659 [2024-12-15 05:12:47.500403] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:34.659 [2024-12-15 05:12:47.577563] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:34.659 [2024-12-15 05:12:47.600942] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:34.659 [2024-12-15 05:12:47.600984] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:34.659 [2024-12-15 05:12:47.600997] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:34.659 [2024-12-15 05:12:47.601004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:34.659 [2024-12-15 05:12:47.601010] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:34.659 [2024-12-15 05:12:47.602443] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:12:34.659 [2024-12-15 05:12:47.602553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:12:34.659 [2024-12-15 05:12:47.602651] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:34.659 [2024-12-15 05:12:47.602651] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:34.659 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 [2024-12-15 05:12:47.734435] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.659 05:12:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:34.659 [2024-12-15 05:12:47.793451] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 1 -eq 1 ']' 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@27 -- # num_iterations=100 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@29 -- # NVME_CONNECT='nvme connect -i 8' 00:12:34.659 05:12:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:36.566 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:39.103 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.011 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:43.547 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:46.085 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.992 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:50.530 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:52.437 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:54.974 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:57.512 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:59.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:01.960 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:04.494 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.399 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:08.935 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:11.470 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:13.375 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.909 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:18.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:20.458 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.995 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:24.901 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:27.436 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:29.971 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:31.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:34.410 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:36.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:38.851 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:41.387 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.918 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:45.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:48.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:50.267 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:52.802 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:55.340 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:57.247 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:59.782 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:02.319 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:04.224 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:06.761 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:09.298 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:11.204 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:13.740 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:15.647 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:18.185 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:20.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:25.166 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:27.702 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.607 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:32.145 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:34.682 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:36.588 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:39.125 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:41.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:43.570 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:46.108 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:48.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:50.551 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:53.086 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:55.620 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:58.155 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:00.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:02.593 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:05.127 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:07.032 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:09.565 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:12.099 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:14.005 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:16.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:18.444 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:20.979 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:23.515 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:25.420 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:27.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:30.492 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:32.396 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:34.931 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:37.464 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:39.370 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:41.917 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:44.453 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:46.356 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:48.888 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:50.794 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:53.328 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:55.864 [2024-12-15 05:16:09.029968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232e360 is same with the state(6) to be set 00:15:55.864 
NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:15:57.770 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:00.305 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:02.210 [2024-12-15 05:16:15.825917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232e500 is same with the state(6) to be set 00:16:02.210 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:04.745 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:06.651 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:09.187 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:11.723 [2024-12-15 05:16:24.961019] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x232e500 is same with the state(6) to be set 00:16:11.723 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:13.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:16.165 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:18.700 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:20.606 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:23.146 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.683 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # sync 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:25.683 05:16:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set +e 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:25.683 rmmod nvme_tcp 00:16:25.683 rmmod nvme_fabrics 00:16:25.683 rmmod nvme_keyring 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@128 -- # set -e 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # return 0 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@517 -- # '[' -n 221033 ']' 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@518 -- # killprocess 221033 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 221033 ']' 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 221033 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 221033 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 221033' 00:16:25.683 killing process with pid 221033 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 221033 00:16:25.683 05:16:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 221033 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # iptr 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:25.683 05:16:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:27.592 
00:16:27.592 real 3m59.870s 00:16:27.592 user 15m16.670s 00:16:27.592 sys 0m24.535s 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:16:27.592 ************************************ 00:16:27.592 END TEST nvmf_connect_disconnect 00:16:27.592 ************************************ 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:27.592 ************************************ 00:16:27.592 START TEST nvmf_multitarget 00:16:27.592 ************************************ 00:16:27.592 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:16:27.852 * Looking for test storage... 
00:16:27.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lcov --version 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:27.852 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:27.852 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.852 --rc genhtml_branch_coverage=1 00:16:27.852 --rc genhtml_function_coverage=1 00:16:27.852 --rc genhtml_legend=1 00:16:27.852 --rc geninfo_all_blocks=1 00:16:27.853 --rc geninfo_unexecuted_blocks=1 00:16:27.853 00:16:27.853 ' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:27.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.853 --rc genhtml_branch_coverage=1 00:16:27.853 --rc genhtml_function_coverage=1 00:16:27.853 --rc genhtml_legend=1 00:16:27.853 --rc geninfo_all_blocks=1 00:16:27.853 --rc geninfo_unexecuted_blocks=1 00:16:27.853 00:16:27.853 ' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:27.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.853 --rc genhtml_branch_coverage=1 00:16:27.853 --rc genhtml_function_coverage=1 00:16:27.853 --rc genhtml_legend=1 00:16:27.853 --rc geninfo_all_blocks=1 00:16:27.853 --rc geninfo_unexecuted_blocks=1 00:16:27.853 00:16:27.853 ' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:27.853 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:27.853 --rc genhtml_branch_coverage=1 00:16:27.853 --rc genhtml_function_coverage=1 00:16:27.853 --rc genhtml_legend=1 00:16:27.853 --rc geninfo_all_blocks=1 00:16:27.853 --rc geninfo_unexecuted_blocks=1 00:16:27.853 00:16:27.853 ' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:27.853 05:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # : 0 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:27.853 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:27.853 05:16:41 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # xtrace_disable 00:16:27.853 05:16:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # pci_devs=() 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # net_devs=() 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # e810=() 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # local -ga e810 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # x722=() 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # local -ga x722 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # mlx=() 00:16:34.427 05:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@322 -- # local -ga mlx 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:34.427 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:34.428 05:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:16:34.428 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:34.428 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:34.428 05:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:34.428 Found net devices under 0000:af:00.0: cvl_0_0 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:34.428 
05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:34.428 Found net devices under 0000:af:00.1: cvl_0_1 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@442 -- # is_hw=yes 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:34.428 05:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@284 -- # ip netns 
exec cvl_0_0_ns_spdk ip link set lo up 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:34.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:34.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.351 ms 00:16:34.428 00:16:34.428 --- 10.0.0.2 ping statistics --- 00:16:34.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.428 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:34.428 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:34.428 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.211 ms 00:16:34.428 00:16:34.428 --- 10.0.0.1 ping statistics --- 00:16:34.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:34.428 rtt min/avg/max/mdev = 0.211/0.211/0.211/0.000 ms 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # return 0 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@509 -- # nvmfpid=264008 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@510 -- # 
waitforlisten 264008 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 264008 ']' 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:34.428 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:34.428 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.429 [2024-12-15 05:16:47.492815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:34.429 [2024-12-15 05:16:47.492856] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:34.429 [2024-12-15 05:16:47.573088] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:34.429 [2024-12-15 05:16:47.595410] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:34.429 [2024-12-15 05:16:47.595449] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:34.429 [2024-12-15 05:16:47.595456] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:34.429 [2024-12-15 05:16:47.595462] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:34.429 [2024-12-15 05:16:47.595468] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:34.429 [2024-12-15 05:16:47.596881] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:34.429 [2024-12-15 05:16:47.596989] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:34.429 [2024-12-15 05:16:47.597080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:34.429 [2024-12-15 05:16:47.597081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.429 05:16:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:16:34.429 "nvmf_tgt_1" 00:16:34.429 05:16:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:16:34.429 "nvmf_tgt_2" 00:16:34.429 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.429 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:16:34.687 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:16:34.687 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:16:34.687 true 00:16:34.687 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:16:34.687 true 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@516 -- # nvmfcleanup 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # sync 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set +e 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # for i in {1..20} 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:16:34.946 rmmod nvme_tcp 00:16:34.946 rmmod nvme_fabrics 00:16:34.946 rmmod nvme_keyring 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@128 -- # set -e 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # return 0 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@517 -- # '[' -n 264008 ']' 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@518 -- # killprocess 264008 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 264008 ']' 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 264008 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 264008 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:34.946 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:34.947 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 264008' 00:16:34.947 killing process with pid 264008 00:16:34.947 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 264008 00:16:34.947 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 264008 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # iptr 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-save 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@791 -- # iptables-restore 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # remove_spdk_ns 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:35.206 05:16:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:16:37.745 00:16:37.745 real 0m9.572s 00:16:37.745 user 0m7.107s 00:16:37.745 sys 0m4.839s 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 ************************************ 00:16:37.745 END TEST nvmf_multitarget 00:16:37.745 ************************************ 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:37.745 ************************************ 00:16:37.745 START TEST nvmf_rpc 00:16:37.745 ************************************ 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:16:37.745 * Looking for test storage... 
00:16:37.745 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:16:37.745 05:16:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:37.745 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:37.745 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:37.745 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:37.746 05:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:37.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.746 --rc genhtml_branch_coverage=1 00:16:37.746 --rc genhtml_function_coverage=1 00:16:37.746 --rc genhtml_legend=1 00:16:37.746 --rc geninfo_all_blocks=1 00:16:37.746 --rc geninfo_unexecuted_blocks=1 
00:16:37.746 00:16:37.746 ' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:37.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.746 --rc genhtml_branch_coverage=1 00:16:37.746 --rc genhtml_function_coverage=1 00:16:37.746 --rc genhtml_legend=1 00:16:37.746 --rc geninfo_all_blocks=1 00:16:37.746 --rc geninfo_unexecuted_blocks=1 00:16:37.746 00:16:37.746 ' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:37.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.746 --rc genhtml_branch_coverage=1 00:16:37.746 --rc genhtml_function_coverage=1 00:16:37.746 --rc genhtml_legend=1 00:16:37.746 --rc geninfo_all_blocks=1 00:16:37.746 --rc geninfo_unexecuted_blocks=1 00:16:37.746 00:16:37.746 ' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:37.746 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:37.746 --rc genhtml_branch_coverage=1 00:16:37.746 --rc genhtml_function_coverage=1 00:16:37.746 --rc genhtml_legend=1 00:16:37.746 --rc geninfo_all_blocks=1 00:16:37.746 --rc geninfo_unexecuted_blocks=1 00:16:37.746 00:16:37.746 ' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:37.746 05:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # : 0 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:16:37.746 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@476 -- # prepare_net_devs 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:37.746 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:37.747 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:16:37.747 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:16:37.747 05:16:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # xtrace_disable 00:16:37.747 05:16:51 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # pci_devs=() 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # local -a pci_devs 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # pci_drivers=() 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # net_devs=() 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@319 -- # local -ga net_devs 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # e810=() 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # local -ga e810 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # x722=() 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # local -ga x722 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # mlx=() 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@322 -- # local -ga mlx 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:44.324 
05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 
(0x8086 - 0x159b)' 00:16:44.324 Found 0000:af:00.0 (0x8086 - 0x159b) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:16:44.324 Found 0000:af:00.1 (0x8086 - 0x159b) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.324 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:16:44.325 Found net devices under 0000:af:00.0: cvl_0_0 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:16:44.325 Found net devices under 0000:af:00.1: cvl_0_1 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:16:44.325 05:16:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@442 -- # is_hw=yes 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:16:44.325 
05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:16:44.325 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:44.325 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.233 ms 00:16:44.325 00:16:44.325 --- 10.0.0.2 ping statistics --- 00:16:44.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.325 rtt min/avg/max/mdev = 0.233/0.233/0.233/0.000 ms 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:44.325 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:16:44.325 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.236 ms 00:16:44.325 00:16:44.325 --- 10.0.0.1 ping statistics --- 00:16:44.325 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:44.325 rtt min/avg/max/mdev = 0.236/0.236/0.236/0.000 ms 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # return 0 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:16:44.325 05:16:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@509 -- # nvmfpid=267722 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@510 -- # waitforlisten 267722 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 267722 ']' 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:44.325 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 [2024-12-15 05:16:57.074515] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:16:44.325 [2024-12-15 05:16:57.074556] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:16:44.325 [2024-12-15 05:16:57.148802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:16:44.325 [2024-12-15 05:16:57.171246] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:16:44.325 [2024-12-15 05:16:57.171285] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:16:44.325 [2024-12-15 05:16:57.171292] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:16:44.325 [2024-12-15 05:16:57.171298] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:16:44.325 [2024-12-15 05:16:57.171303] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:16:44.325 [2024-12-15 05:16:57.172637] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:44.325 [2024-12-15 05:16:57.172747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:16:44.325 [2024-12-15 05:16:57.172853] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.325 [2024-12-15 05:16:57.172855] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.325 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.325 05:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:16:44.325 "tick_rate": 2100000000, 00:16:44.325 "poll_groups": [ 00:16:44.325 { 00:16:44.325 "name": "nvmf_tgt_poll_group_000", 00:16:44.325 "admin_qpairs": 0, 00:16:44.325 "io_qpairs": 0, 00:16:44.325 "current_admin_qpairs": 0, 00:16:44.325 "current_io_qpairs": 0, 00:16:44.325 "pending_bdev_io": 0, 00:16:44.325 "completed_nvme_io": 0, 00:16:44.325 "transports": [] 00:16:44.325 }, 00:16:44.325 { 00:16:44.325 "name": "nvmf_tgt_poll_group_001", 00:16:44.325 "admin_qpairs": 0, 00:16:44.325 "io_qpairs": 0, 00:16:44.325 "current_admin_qpairs": 0, 00:16:44.325 "current_io_qpairs": 0, 00:16:44.325 "pending_bdev_io": 0, 00:16:44.325 "completed_nvme_io": 0, 00:16:44.325 "transports": [] 00:16:44.325 }, 00:16:44.325 { 00:16:44.325 "name": "nvmf_tgt_poll_group_002", 00:16:44.325 "admin_qpairs": 0, 00:16:44.325 "io_qpairs": 0, 00:16:44.325 "current_admin_qpairs": 0, 00:16:44.325 "current_io_qpairs": 0, 00:16:44.325 "pending_bdev_io": 0, 00:16:44.325 "completed_nvme_io": 0, 00:16:44.325 "transports": [] 00:16:44.325 }, 00:16:44.325 { 00:16:44.325 "name": "nvmf_tgt_poll_group_003", 00:16:44.325 "admin_qpairs": 0, 00:16:44.325 "io_qpairs": 0, 00:16:44.325 "current_admin_qpairs": 0, 00:16:44.325 "current_io_qpairs": 0, 00:16:44.325 "pending_bdev_io": 0, 00:16:44.325 "completed_nvme_io": 0, 00:16:44.326 "transports": [] 00:16:44.326 } 00:16:44.326 ] 00:16:44.326 }' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:16:44.326 05:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 [2024-12-15 05:16:57.413632] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:16:44.326 "tick_rate": 2100000000, 00:16:44.326 "poll_groups": [ 00:16:44.326 { 00:16:44.326 "name": "nvmf_tgt_poll_group_000", 00:16:44.326 "admin_qpairs": 0, 00:16:44.326 "io_qpairs": 0, 00:16:44.326 "current_admin_qpairs": 0, 00:16:44.326 "current_io_qpairs": 0, 00:16:44.326 "pending_bdev_io": 0, 00:16:44.326 "completed_nvme_io": 0, 00:16:44.326 "transports": [ 00:16:44.326 { 00:16:44.326 "trtype": "TCP" 00:16:44.326 } 00:16:44.326 ] 00:16:44.326 }, 00:16:44.326 { 00:16:44.326 "name": "nvmf_tgt_poll_group_001", 00:16:44.326 "admin_qpairs": 0, 00:16:44.326 "io_qpairs": 0, 00:16:44.326 "current_admin_qpairs": 0, 00:16:44.326 "current_io_qpairs": 0, 00:16:44.326 "pending_bdev_io": 0, 00:16:44.326 
"completed_nvme_io": 0, 00:16:44.326 "transports": [ 00:16:44.326 { 00:16:44.326 "trtype": "TCP" 00:16:44.326 } 00:16:44.326 ] 00:16:44.326 }, 00:16:44.326 { 00:16:44.326 "name": "nvmf_tgt_poll_group_002", 00:16:44.326 "admin_qpairs": 0, 00:16:44.326 "io_qpairs": 0, 00:16:44.326 "current_admin_qpairs": 0, 00:16:44.326 "current_io_qpairs": 0, 00:16:44.326 "pending_bdev_io": 0, 00:16:44.326 "completed_nvme_io": 0, 00:16:44.326 "transports": [ 00:16:44.326 { 00:16:44.326 "trtype": "TCP" 00:16:44.326 } 00:16:44.326 ] 00:16:44.326 }, 00:16:44.326 { 00:16:44.326 "name": "nvmf_tgt_poll_group_003", 00:16:44.326 "admin_qpairs": 0, 00:16:44.326 "io_qpairs": 0, 00:16:44.326 "current_admin_qpairs": 0, 00:16:44.326 "current_io_qpairs": 0, 00:16:44.326 "pending_bdev_io": 0, 00:16:44.326 "completed_nvme_io": 0, 00:16:44.326 "transports": [ 00:16:44.326 { 00:16:44.326 "trtype": "TCP" 00:16:44.326 } 00:16:44.326 ] 00:16:44.326 } 00:16:44.326 ] 00:16:44.326 }' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:16:44.326 
05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 Malloc1 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:16:44.326 05:16:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 [2024-12-15 05:16:57.593507] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:16:44.326 [2024-12-15 05:16:57.622104] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:44.326 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:44.326 could not add new controller: failed to write to nvme-fabrics device 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.326 05:16:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:45.264 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:16:45.264 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:45.264 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:45.264 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:45.264 05:16:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:47.169 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:47.169 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:47.169 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:47.428 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:47.428 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:47.428 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:16:47.428 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:47.428 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:47.428 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:47.428 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.429 05:17:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:16:47.429 05:17:01 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:47.429 [2024-12-15 05:17:01.027949] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562' 00:16:47.429 Failed to write to /dev/nvme-fabrics: Input/output error 00:16:47.429 could not add new controller: failed to write to nvme-fabrics device 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:16:47.429 
05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.429 05:17:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:48.805 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:16:48.805 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:48.805 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:48.805 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:48.805 05:17:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:50.710 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:16:50.710 05:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.710 [2024-12-15 05:17:04.388995] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.710 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:50.969 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.969 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:50.969 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.969 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:16:50.969 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.969 05:17:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:51.903 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:51.903 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:51.903 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:51.903 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:51.903 05:17:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:53.914 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:53.915 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:53.915 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:54.174 
05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.174 [2024-12-15 05:17:07.647630] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.174 05:17:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:55.552 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:55.552 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:55.552 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:55.552 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:55.552 05:17:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:16:57.458 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:16:57.458 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:16:57.459 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.459 05:17:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.459 05:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.459 [2024-12-15 05:17:11.006174] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.459 05:17:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:16:58.839 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:16:58.839 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:16:58.839 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:16:58.839 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:16:58.839 05:17:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:00.748 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:00.749 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.749 [2024-12-15 05:17:14.364545] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.749 05:17:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:02.128 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:02.128 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:02.128 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:17:02.128 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:02.128 05:17:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:04.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.035 [2024-12-15 05:17:17.621122] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.035 05:17:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:17:05.415 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:17:05.415 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:17:05.415 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:05.415 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:05.415 05:17:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:17:07.321 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.321 [2024-12-15 05:17:20.982026] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 05:17:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.321 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.321 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.321 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.321 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.581 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 [2024-12-15 05:17:21.034107] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 
05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:17:07.582 [2024-12-15 05:17:21.082267] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 [2024-12-15 05:17:21.130419] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 [2024-12-15 05:17:21.182610] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.582 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:17:07.582 "tick_rate": 2100000000, 00:17:07.582 "poll_groups": [ 00:17:07.582 { 00:17:07.582 "name": "nvmf_tgt_poll_group_000", 00:17:07.582 "admin_qpairs": 2, 00:17:07.582 "io_qpairs": 168, 00:17:07.582 "current_admin_qpairs": 0, 00:17:07.582 "current_io_qpairs": 0, 00:17:07.582 "pending_bdev_io": 0, 00:17:07.582 "completed_nvme_io": 219, 00:17:07.582 "transports": [ 00:17:07.582 { 00:17:07.582 "trtype": "TCP" 00:17:07.582 } 00:17:07.582 ] 00:17:07.582 }, 00:17:07.582 { 00:17:07.582 "name": "nvmf_tgt_poll_group_001", 00:17:07.582 "admin_qpairs": 2, 00:17:07.582 "io_qpairs": 168, 00:17:07.582 "current_admin_qpairs": 0, 00:17:07.582 "current_io_qpairs": 0, 00:17:07.582 "pending_bdev_io": 0, 00:17:07.582 "completed_nvme_io": 220, 00:17:07.582 "transports": [ 00:17:07.582 { 00:17:07.582 "trtype": "TCP" 00:17:07.582 } 00:17:07.583 ] 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "name": "nvmf_tgt_poll_group_002", 00:17:07.583 "admin_qpairs": 1, 00:17:07.583 "io_qpairs": 168, 00:17:07.583 "current_admin_qpairs": 0, 00:17:07.583 "current_io_qpairs": 0, 00:17:07.583 "pending_bdev_io": 0, 
00:17:07.583 "completed_nvme_io": 267, 00:17:07.583 "transports": [ 00:17:07.583 { 00:17:07.583 "trtype": "TCP" 00:17:07.583 } 00:17:07.583 ] 00:17:07.583 }, 00:17:07.583 { 00:17:07.583 "name": "nvmf_tgt_poll_group_003", 00:17:07.583 "admin_qpairs": 2, 00:17:07.583 "io_qpairs": 168, 00:17:07.583 "current_admin_qpairs": 0, 00:17:07.583 "current_io_qpairs": 0, 00:17:07.583 "pending_bdev_io": 0, 00:17:07.583 "completed_nvme_io": 316, 00:17:07.583 "transports": [ 00:17:07.583 { 00:17:07.583 "trtype": "TCP" 00:17:07.583 } 00:17:07.583 ] 00:17:07.583 } 00:17:07.583 ] 00:17:07.583 }' 00:17:07.583 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:17:07.583 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:17:07.583 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:17:07.583 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:07.841 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:17:07.841 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:17:07.841 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:17:07.841 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:17:07.841 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # sync 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set +e 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:07.842 rmmod nvme_tcp 00:17:07.842 rmmod nvme_fabrics 00:17:07.842 rmmod nvme_keyring 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@128 -- # set -e 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@129 -- # return 0 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@517 -- # '[' -n 267722 ']' 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@518 -- # killprocess 267722 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 267722 ']' 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 267722 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 267722 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 267722' 00:17:07.842 killing process with pid 267722 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 267722 00:17:07.842 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 267722 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # iptr 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-save 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@791 -- # iptables-restore 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:08.101 05:17:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:10.639 00:17:10.639 real 0m32.810s 00:17:10.639 user 1m39.075s 00:17:10.639 sys 0m6.500s 00:17:10.639 05:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:17:10.639 ************************************ 00:17:10.639 END TEST nvmf_rpc 00:17:10.639 ************************************ 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:10.639 ************************************ 00:17:10.639 START TEST nvmf_invalid 00:17:10.639 ************************************ 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:17:10.639 * Looking for test storage... 
00:17:10.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lcov --version 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:17:10.639 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.640 --rc genhtml_branch_coverage=1 00:17:10.640 --rc 
genhtml_function_coverage=1 00:17:10.640 --rc genhtml_legend=1 00:17:10.640 --rc geninfo_all_blocks=1 00:17:10.640 --rc geninfo_unexecuted_blocks=1 00:17:10.640 00:17:10.640 ' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.640 --rc genhtml_branch_coverage=1 00:17:10.640 --rc genhtml_function_coverage=1 00:17:10.640 --rc genhtml_legend=1 00:17:10.640 --rc geninfo_all_blocks=1 00:17:10.640 --rc geninfo_unexecuted_blocks=1 00:17:10.640 00:17:10.640 ' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.640 --rc genhtml_branch_coverage=1 00:17:10.640 --rc genhtml_function_coverage=1 00:17:10.640 --rc genhtml_legend=1 00:17:10.640 --rc geninfo_all_blocks=1 00:17:10.640 --rc geninfo_unexecuted_blocks=1 00:17:10.640 00:17:10.640 ' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:10.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:10.640 --rc genhtml_branch_coverage=1 00:17:10.640 --rc genhtml_function_coverage=1 00:17:10.640 --rc genhtml_legend=1 00:17:10.640 --rc geninfo_all_blocks=1 00:17:10.640 --rc geninfo_unexecuted_blocks=1 00:17:10.640 00:17:10.640 ' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ 
-e /bin/wpdk_common.sh ]] 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # : 0 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:10.640 05:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:10.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:10.640 05:17:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:10.640 05:17:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:10.640 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:10.640 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:10.640 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # xtrace_disable 00:17:10.641 05:17:24 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # pci_devs=() 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # net_devs=() 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # e810=() 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # local -ga e810 00:17:17.221 05:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # x722=() 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # local -ga x722 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # mlx=() 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@322 -- # local -ga mlx 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:17.221 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:17.222 05:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:17.222 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:17.222 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:17.222 Found net devices under 0000:af:00.0: cvl_0_0 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:17.222 Found net devices under 0000:af:00.1: cvl_0_1 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@442 -- # is_hw=yes 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:17.222 05:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:17.222 05:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:17.222 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:17.222 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.281 ms 00:17:17.222 00:17:17.222 --- 10.0.0.2 ping statistics --- 00:17:17.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.222 rtt min/avg/max/mdev = 0.281/0.281/0.281/0.000 ms 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:17.222 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:17.222 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:17:17.222 00:17:17.222 --- 10.0.0.1 ping statistics --- 00:17:17.222 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:17.222 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # return 0 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:17.222 05:17:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@509 -- # nvmfpid=275217 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@510 -- # waitforlisten 275217 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 275217 ']' 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:17.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:17.222 05:17:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:17.222 [2024-12-15 05:17:29.964851] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:17.222 [2024-12-15 05:17:29.964901] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:17.223 [2024-12-15 05:17:30.046475] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:17.223 [2024-12-15 05:17:30.071387] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:17.223 [2024-12-15 05:17:30.071423] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:17.223 [2024-12-15 05:17:30.071430] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:17.223 [2024-12-15 05:17:30.071437] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:17.223 [2024-12-15 05:17:30.071443] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:17.223 [2024-12-15 05:17:30.072777] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:17.223 [2024-12-15 05:17:30.072889] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:17.223 [2024-12-15 05:17:30.073030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:17.223 [2024-12-15 05:17:30.073031] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode2509 00:17:17.223 [2024-12-15 05:17:30.381862] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:17:17.223 { 00:17:17.223 "nqn": "nqn.2016-06.io.spdk:cnode2509", 00:17:17.223 "tgt_name": "foobar", 00:17:17.223 "method": "nvmf_create_subsystem", 00:17:17.223 "req_id": 1 00:17:17.223 } 00:17:17.223 Got JSON-RPC error 
response 00:17:17.223 response: 00:17:17.223 { 00:17:17.223 "code": -32603, 00:17:17.223 "message": "Unable to find target foobar" 00:17:17.223 }' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:17:17.223 { 00:17:17.223 "nqn": "nqn.2016-06.io.spdk:cnode2509", 00:17:17.223 "tgt_name": "foobar", 00:17:17.223 "method": "nvmf_create_subsystem", 00:17:17.223 "req_id": 1 00:17:17.223 } 00:17:17.223 Got JSON-RPC error response 00:17:17.223 response: 00:17:17.223 { 00:17:17.223 "code": -32603, 00:17:17.223 "message": "Unable to find target foobar" 00:17:17.223 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode1888 00:17:17.223 [2024-12-15 05:17:30.594595] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1888: invalid serial number 'SPDKISFASTANDAWESOME' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:17:17.223 { 00:17:17.223 "nqn": "nqn.2016-06.io.spdk:cnode1888", 00:17:17.223 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:17.223 "method": "nvmf_create_subsystem", 00:17:17.223 "req_id": 1 00:17:17.223 } 00:17:17.223 Got JSON-RPC error response 00:17:17.223 response: 00:17:17.223 { 00:17:17.223 "code": -32602, 00:17:17.223 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:17.223 }' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:17:17.223 { 00:17:17.223 "nqn": "nqn.2016-06.io.spdk:cnode1888", 00:17:17.223 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:17:17.223 "method": "nvmf_create_subsystem", 00:17:17.223 
"req_id": 1 00:17:17.223 } 00:17:17.223 Got JSON-RPC error response 00:17:17.223 response: 00:17:17.223 { 00:17:17.223 "code": -32602, 00:17:17.223 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:17:17.223 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode4040 00:17:17.223 [2024-12-15 05:17:30.807314] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4040: invalid model number 'SPDK_Controller' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:17:17.223 { 00:17:17.223 "nqn": "nqn.2016-06.io.spdk:cnode4040", 00:17:17.223 "model_number": "SPDK_Controller\u001f", 00:17:17.223 "method": "nvmf_create_subsystem", 00:17:17.223 "req_id": 1 00:17:17.223 } 00:17:17.223 Got JSON-RPC error response 00:17:17.223 response: 00:17:17.223 { 00:17:17.223 "code": -32602, 00:17:17.223 "message": "Invalid MN SPDK_Controller\u001f" 00:17:17.223 }' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:17:17.223 { 00:17:17.223 "nqn": "nqn.2016-06.io.spdk:cnode4040", 00:17:17.223 "model_number": "SPDK_Controller\u001f", 00:17:17.223 "method": "nvmf_create_subsystem", 00:17:17.223 "req_id": 1 00:17:17.223 } 00:17:17.223 Got JSON-RPC error response 00:17:17.223 response: 00:17:17.223 { 00:17:17.223 "code": -32602, 00:17:17.223 "message": "Invalid MN SPDK_Controller\u001f" 00:17:17.223 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:17.223 05:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:17:17.223 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:17:17.224 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.224 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.224 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 
00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 85 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x55' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=U 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:17.484 
05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 89 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x59' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Y 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ v == \- ]] 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'vZT`piy.xJr5?LU,YO&d&' 00:17:17.484 05:17:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'vZT`piy.xJr5?LU,YO&d&' nqn.2016-06.io.spdk:cnode21586 00:17:17.484 [2024-12-15 05:17:31.152469] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode21586: invalid serial number 'vZT`piy.xJr5?LU,YO&d&' 00:17:17.744 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:17:17.744 { 00:17:17.745 "nqn": "nqn.2016-06.io.spdk:cnode21586", 00:17:17.745 "serial_number": "vZT`piy.xJr5?LU,YO&d&", 00:17:17.745 "method": "nvmf_create_subsystem", 00:17:17.745 "req_id": 1 00:17:17.745 } 00:17:17.745 
Got JSON-RPC error response 00:17:17.745 response: 00:17:17.745 { 00:17:17.745 "code": -32602, 00:17:17.745 "message": "Invalid SN vZT`piy.xJr5?LU,YO&d&" 00:17:17.745 }' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:17:17.745 { 00:17:17.745 "nqn": "nqn.2016-06.io.spdk:cnode21586", 00:17:17.745 "serial_number": "vZT`piy.xJr5?LU,YO&d&", 00:17:17.745 "method": "nvmf_create_subsystem", 00:17:17.745 "req_id": 1 00:17:17.745 } 00:17:17.745 Got JSON-RPC error response 00:17:17.745 response: 00:17:17.745 { 00:17:17.745 "code": -32602, 00:17:17.745 "message": "Invalid SN vZT`piy.xJr5?LU,YO&d&" 00:17:17.745 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 
00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:17:17.745 
05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:17:17.745 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 104 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x68' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=h 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 107 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6b' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=k 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:17:17.746 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:17:17.746 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:17:17.746 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:17.746 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.006 05:17:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ e == \- ]] 00:17:18.006 05:17:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'e7{ZPT!~..,"rT}'\''i:mK#{h`0k~ /dev/null' 00:17:20.082 05:17:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:22.620 00:17:22.620 real 0m11.942s 00:17:22.620 user 0m18.573s 00:17:22.620 sys 0m5.319s 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:17:22.620 ************************************ 00:17:22.620 END TEST nvmf_invalid 00:17:22.620 ************************************ 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:22.620 ************************************ 
00:17:22.620 START TEST nvmf_connect_stress 00:17:22.620 ************************************ 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:17:22.620 * Looking for test storage... 00:17:22.620 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:17:22.620 05:17:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:22.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.620 --rc genhtml_branch_coverage=1 00:17:22.620 --rc genhtml_function_coverage=1 00:17:22.620 --rc genhtml_legend=1 00:17:22.620 --rc geninfo_all_blocks=1 00:17:22.620 --rc geninfo_unexecuted_blocks=1 00:17:22.620 00:17:22.620 ' 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:22.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.620 --rc genhtml_branch_coverage=1 00:17:22.620 --rc genhtml_function_coverage=1 00:17:22.620 --rc genhtml_legend=1 00:17:22.620 --rc geninfo_all_blocks=1 00:17:22.620 --rc geninfo_unexecuted_blocks=1 00:17:22.620 00:17:22.620 ' 00:17:22.620 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:22.620 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.621 --rc genhtml_branch_coverage=1 00:17:22.621 --rc genhtml_function_coverage=1 00:17:22.621 --rc genhtml_legend=1 00:17:22.621 --rc geninfo_all_blocks=1 00:17:22.621 --rc geninfo_unexecuted_blocks=1 00:17:22.621 00:17:22.621 ' 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:22.621 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:22.621 --rc genhtml_branch_coverage=1 00:17:22.621 --rc genhtml_function_coverage=1 00:17:22.621 --rc genhtml_legend=1 00:17:22.621 --rc geninfo_all_blocks=1 
00:17:22.621 --rc geninfo_unexecuted_blocks=1 00:17:22.621 00:17:22.621 ' 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # : 0 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:22.621 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:22.621 05:17:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 
00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:17:22.621 05:17:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # net_devs=() 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # e810=() 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # local -ga e810 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # x722=() 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # local -ga x722 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # mlx=() 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:29.196 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:29.197 05:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:29.197 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.197 05:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:29.197 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.197 05:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:29.197 Found net devices under 0000:af:00.0: cvl_0_0 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:29.197 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:29.197 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:29.197 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.366 ms 00:17:29.197 00:17:29.197 --- 10.0.0.2 ping statistics --- 00:17:29.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.197 rtt min/avg/max/mdev = 0.366/0.366/0.366/0.000 ms 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:29.197 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:29.197 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:17:29.197 00:17:29.197 --- 10.0.0.1 ping statistics --- 00:17:29.197 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:29.197 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # return 0 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:17:29.197 05:17:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@509 -- # nvmfpid=279470 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@510 -- # waitforlisten 279470 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:17:29.197 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 279470 ']' 00:17:29.198 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:29.198 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:29.198 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:29.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:29.198 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:29.198 05:17:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 [2024-12-15 05:17:41.969100] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:29.198 [2024-12-15 05:17:41.969145] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:29.198 [2024-12-15 05:17:42.047419] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:29.198 [2024-12-15 05:17:42.069571] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:29.198 [2024-12-15 05:17:42.069604] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:29.198 [2024-12-15 05:17:42.069611] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:29.198 [2024-12-15 05:17:42.069617] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:29.198 [2024-12-15 05:17:42.069622] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
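The nvmf_tcp_init sequence logged above (flush both ports of the NIC, move the target port into a fresh network namespace, assign the 10.0.0.1/10.0.0.2 pair, bring links up, and open TCP port 4420) can be sketched as below. This is a dry-run reconstruction from the log, not the SPDK script itself: interface names cvl_0_0/cvl_0_1 and the addresses are specific to this testbed, and because the real commands need root, the sketch only collects and prints them.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace-based TCP loopback setup recorded in this log.
# Names and addresses are taken from the log; commands are printed, not executed,
# since ip/iptables changes require root.
TARGET_IF=cvl_0_0        # port moved into the target namespace
INITIATOR_IF=cvl_0_1     # port left in the default namespace
NS=cvl_0_0_ns_spdk       # namespace the nvmf target runs in

cmds=(
  "ip -4 addr flush $TARGET_IF"
  "ip -4 addr flush $INITIATOR_IF"
  "ip netns add $NS"
  "ip link set $TARGET_IF netns $NS"
  "ip addr add 10.0.0.1/24 dev $INITIATOR_IF"
  "ip netns exec $NS ip addr add 10.0.0.2/24 dev $TARGET_IF"
  "ip link set $INITIATOR_IF up"
  "ip netns exec $NS ip link set $TARGET_IF up"
  "ip netns exec $NS ip link set lo up"
  "iptables -I INPUT 1 -i $INITIATOR_IF -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${cmds[@]}"
```

After this setup the two ping checks in the log (10.0.0.2 from the default namespace, 10.0.0.1 from inside cvl_0_0_ns_spdk) confirm the loopback path before nvmf_tgt is launched under `ip netns exec`.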
00:17:29.198 [2024-12-15 05:17:42.070939] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.198 [2024-12-15 05:17:42.071046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:29.198 [2024-12-15 05:17:42.071046] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 [2024-12-15 05:17:42.202597] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 
-- # xtrace_disable 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 [2024-12-15 05:17:42.226835] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 NULL1 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=279496 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.198 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.458 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.458 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:29.458 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.458 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.458 05:17:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.717 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.717 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:29.717 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.717 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.717 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:29.977 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.977 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:29.977 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:29.977 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.977 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.545 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.545 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:30.545 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.545 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.545 05:17:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:30.805 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.805 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:30.805 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:30.805 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.805 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.064 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.064 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:31.064 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.064 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.064 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.323 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.323 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:31.323 05:17:44 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.323 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.323 05:17:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:31.584 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.584 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:31.584 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:31.584 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.584 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.151 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.151 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:32.151 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.151 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.151 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.410 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.410 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:32.410 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.410 05:17:45 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.410 05:17:45 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.670 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.670 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:32.670 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.670 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.670 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:32.929 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.929 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:32.929 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:32.929 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.929 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.498 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.498 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:33.498 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.498 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.498 05:17:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:33.757 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.757 05:17:47 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:33.757 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:33.757 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.757 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.017 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.017 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:34.017 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.017 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.017 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.277 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.277 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:34.277 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.277 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.277 05:17:47 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:34.536 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.536 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:34.536 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:34.536 05:17:48 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.536 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.105 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.105 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:35.105 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.105 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.105 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.365 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.365 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:35.365 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.365 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.365 05:17:48 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.624 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.624 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:35.624 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.624 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.624 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:35.883 05:17:49 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:35.883 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:35.883 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:35.883 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:35.883 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.142 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.142 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:36.142 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.142 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.142 05:17:49 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.711 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.711 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:36.711 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.711 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.711 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:36.971 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:36.971 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:36.971 
05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:36.971 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:36.971 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.229 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.229 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:37.229 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.229 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.229 05:17:50 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:37.489 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:37.489 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:37.489 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:37.489 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:37.489 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.056 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.056 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:38.056 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.056 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.056 
05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.315 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.315 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:38.315 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.315 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.315 05:17:51 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.575 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.575 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:38.575 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.575 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.575 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.834 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:38.834 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:38.834 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:17:38.834 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:38.834 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:38.834 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:39.093 05:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 279496 00:17:39.093 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (279496) - No such process 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 279496 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # sync 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set +e 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:39.093 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:39.093 rmmod nvme_tcp 00:17:39.093 rmmod nvme_fabrics 00:17:39.352 rmmod nvme_keyring 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@128 -- # set -e 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # return 0 
00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@517 -- # '[' -n 279470 ']' 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@518 -- # killprocess 279470 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 279470 ']' 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 279470 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 279470 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 279470' 00:17:39.353 killing process with pid 279470 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 279470 00:17:39.353 05:17:52 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 279470 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # iptr 
00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-save 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@791 -- # iptables-restore 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:39.353 05:17:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:41.892 00:17:41.892 real 0m19.286s 00:17:41.892 user 0m42.092s 00:17:41.892 sys 0m6.820s 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:17:41.892 ************************************ 00:17:41.892 END TEST nvmf_connect_stress 00:17:41.892 ************************************ 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 
-- # xtrace_disable 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:41.892 ************************************ 00:17:41.892 START TEST nvmf_fused_ordering 00:17:41.892 ************************************ 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:17:41.892 * Looking for test storage... 00:17:41.892 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lcov --version 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # 
local 'op=<' 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:41.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.892 --rc genhtml_branch_coverage=1 00:17:41.892 --rc genhtml_function_coverage=1 00:17:41.892 --rc genhtml_legend=1 00:17:41.892 --rc geninfo_all_blocks=1 00:17:41.892 --rc geninfo_unexecuted_blocks=1 00:17:41.892 00:17:41.892 ' 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:41.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.892 --rc genhtml_branch_coverage=1 00:17:41.892 --rc genhtml_function_coverage=1 00:17:41.892 --rc genhtml_legend=1 00:17:41.892 --rc geninfo_all_blocks=1 00:17:41.892 --rc geninfo_unexecuted_blocks=1 00:17:41.892 00:17:41.892 ' 00:17:41.892 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:41.892 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.892 --rc genhtml_branch_coverage=1 00:17:41.893 --rc genhtml_function_coverage=1 00:17:41.893 --rc genhtml_legend=1 00:17:41.893 --rc geninfo_all_blocks=1 00:17:41.893 --rc geninfo_unexecuted_blocks=1 00:17:41.893 00:17:41.893 ' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:41.893 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:41.893 --rc genhtml_branch_coverage=1 
00:17:41.893 --rc genhtml_function_coverage=1 00:17:41.893 --rc genhtml_legend=1 00:17:41.893 --rc geninfo_all_blocks=1 00:17:41.893 --rc geninfo_unexecuted_blocks=1 00:17:41.893 00:17:41.893 ' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:41.893 05:17:55 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # : 0 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:41.893 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 
00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@438 -- # local -g is_hw=no 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # xtrace_disable 00:17:41.893 05:17:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # pci_devs=() 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # net_devs=() 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # e810=() 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # local -ga e810 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # x722=() 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # local -ga x722 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # mlx=() 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@322 -- # local -ga mlx 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:48.468 05:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:48.468 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.468 05:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:48.468 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.468 05:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:17:48.468 Found net devices under 0000:af:00.0: cvl_0_0 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:48.468 Found net devices under 0000:af:00.1: cvl_0_1 
00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@442 -- # is_hw=yes 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:48.468 05:18:00 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:48.468 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:48.468 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:48.469 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:48.469 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.332 ms 00:17:48.469 00:17:48.469 --- 10.0.0.2 ping statistics --- 00:17:48.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.469 rtt min/avg/max/mdev = 0.332/0.332/0.332/0.000 ms 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:48.469 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:48.469 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.141 ms 00:17:48.469 00:17:48.469 --- 10.0.0.1 ping statistics --- 00:17:48.469 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:48.469 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # return 0 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:17:48.469 05:18:01 
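The namespace setup traced above isolates the target-side port so that initiator and target talk over a real TCP path on a single host. A condensed, dry-run sketch of the same steps (interface names, addresses, and port taken from the log; run with APPLY=1 as root to actually execute; the `-m comment` tagging added by the `ipts` wrapper is omitted):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns plumbing seen in the log. With APPLY unset,
# every command is printed instead of executed.
TGT_IF=cvl_0_0            # target-side interface (name from the log)
INI_IF=cvl_0_1            # initiator-side interface
NETNS=cvl_0_0_ns_spdk     # namespace that will own the target NIC

run() {
    if [ "${APPLY:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi
}

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NETNS"
run ip link set "$TGT_IF" netns "$NETNS"                 # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev "$INI_IF"                # initiator IP
run ip netns exec "$NETNS" ip addr add 10.0.0.2/24 dev "$TGT_IF"   # target IP
run ip link set "$INI_IF" up
run ip netns exec "$NETNS" ip link set "$TGT_IF" up
run ip netns exec "$NETNS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT  # open the NVMe/TCP port
run ping -c 1 10.0.0.2                                   # reachability check, as in the log
```

The two `ping -c 1` checks in the log (host to 10.0.0.2, then inside the namespace back to 10.0.0.1) confirm both directions of this path before the target is started.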
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@509 -- # nvmfpid=284851 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@510 -- # waitforlisten 284851 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 284851 ']' 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.469 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 [2024-12-15 05:18:01.387671] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:17:48.469 [2024-12-15 05:18:01.387714] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:48.469 [2024-12-15 05:18:01.450660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.469 [2024-12-15 05:18:01.472125] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:48.469 [2024-12-15 05:18:01.472160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:48.469 [2024-12-15 05:18:01.472167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:48.469 [2024-12-15 05:18:01.472173] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:48.469 [2024-12-15 05:18:01.472179] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
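nvmfappstart backgrounds nvmf_tgt inside the namespace and then waitforlisten blocks until the RPC socket /var/tmp/spdk.sock is up. The core of that wait can be sketched as a poll loop (socket path from the log; this is not SPDK's actual implementation, and the retry budget and interval are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll for the target's RPC UNIX socket
# until it appears or a retry budget runs out.
wait_for_sock() {
    local sock="${1:-/var/tmp/spdk.sock}"
    local retries="${2:-100}"
    while (( retries-- > 0 )); do
        if [ -S "$sock" ]; then
            return 0                    # socket exists; RPCs can be issued
        fi
        sleep 0.1
    done
    echo "timed out waiting for $sock" >&2
    return 1
}
```

Usage would be along the lines of `ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -m 0x2 & wait_for_sock`.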
00:17:48.469 [2024-12-15 05:18:01.472671] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 [2024-12-15 05:18:01.603389] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 [2024-12-15 05:18:01.623564] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 NULL1 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.469 05:18:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:17:48.469 [2024-12-15 05:18:01.679945] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:48.469 [2024-12-15 05:18:01.679976] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid284893 ] 00:17:48.469 Attached to nqn.2016-06.io.spdk:cnode1 00:17:48.469 Namespace ID: 1 size: 1GB 00:17:48.469 fused_ordering(0) 00:17:48.469 fused_ordering(1) 00:17:48.469 fused_ordering(2) 00:17:48.469 fused_ordering(3) 00:17:48.469 fused_ordering(4) 00:17:48.469 fused_ordering(5) 00:17:48.469 fused_ordering(6) 00:17:48.469 fused_ordering(7) 00:17:48.469 fused_ordering(8) 00:17:48.469 fused_ordering(9) 00:17:48.469 fused_ordering(10) 00:17:48.469 fused_ordering(11) 00:17:48.469 fused_ordering(12) 00:17:48.469 fused_ordering(13) 00:17:48.469 fused_ordering(14) 00:17:48.469 fused_ordering(15) 00:17:48.469 fused_ordering(16) 00:17:48.469 fused_ordering(17) 00:17:48.469 fused_ordering(18) 00:17:48.469 fused_ordering(19) 00:17:48.469 fused_ordering(20) 00:17:48.469 fused_ordering(21) 00:17:48.469 fused_ordering(22) 00:17:48.469 fused_ordering(23) 00:17:48.469 fused_ordering(24) 00:17:48.469 fused_ordering(25) 00:17:48.469 fused_ordering(26) 00:17:48.469 fused_ordering(27) 00:17:48.469 
[fused_ordering(28) through fused_ordering(755) elided: the tool prints one monotonically numbered fused_ordering(N) line per iteration; timestamps advance from 00:17:48.470 through 00:17:49.559, and the capture ends mid-run at fused_ordering(755)]
00:17:49.559 fused_ordering(756) 00:17:49.559 fused_ordering(757) 00:17:49.559 fused_ordering(758) 00:17:49.559 fused_ordering(759) 00:17:49.559 fused_ordering(760) 00:17:49.559 fused_ordering(761) 00:17:49.559 fused_ordering(762) 00:17:49.559 fused_ordering(763) 00:17:49.559 fused_ordering(764) 00:17:49.559 fused_ordering(765) 00:17:49.559 fused_ordering(766) 00:17:49.559 fused_ordering(767) 00:17:49.559 fused_ordering(768) 00:17:49.559 fused_ordering(769) 00:17:49.559 fused_ordering(770) 00:17:49.559 fused_ordering(771) 00:17:49.559 fused_ordering(772) 00:17:49.559 fused_ordering(773) 00:17:49.559 fused_ordering(774) 00:17:49.559 fused_ordering(775) 00:17:49.559 fused_ordering(776) 00:17:49.559 fused_ordering(777) 00:17:49.559 fused_ordering(778) 00:17:49.559 fused_ordering(779) 00:17:49.559 fused_ordering(780) 00:17:49.559 fused_ordering(781) 00:17:49.559 fused_ordering(782) 00:17:49.559 fused_ordering(783) 00:17:49.559 fused_ordering(784) 00:17:49.559 fused_ordering(785) 00:17:49.559 fused_ordering(786) 00:17:49.559 fused_ordering(787) 00:17:49.559 fused_ordering(788) 00:17:49.559 fused_ordering(789) 00:17:49.559 fused_ordering(790) 00:17:49.559 fused_ordering(791) 00:17:49.559 fused_ordering(792) 00:17:49.559 fused_ordering(793) 00:17:49.559 fused_ordering(794) 00:17:49.559 fused_ordering(795) 00:17:49.559 fused_ordering(796) 00:17:49.559 fused_ordering(797) 00:17:49.559 fused_ordering(798) 00:17:49.559 fused_ordering(799) 00:17:49.559 fused_ordering(800) 00:17:49.559 fused_ordering(801) 00:17:49.559 fused_ordering(802) 00:17:49.559 fused_ordering(803) 00:17:49.559 fused_ordering(804) 00:17:49.559 fused_ordering(805) 00:17:49.559 fused_ordering(806) 00:17:49.559 fused_ordering(807) 00:17:49.559 fused_ordering(808) 00:17:49.559 fused_ordering(809) 00:17:49.559 fused_ordering(810) 00:17:49.559 fused_ordering(811) 00:17:49.559 fused_ordering(812) 00:17:49.559 fused_ordering(813) 00:17:49.559 fused_ordering(814) 00:17:49.559 fused_ordering(815) 00:17:49.559 
fused_ordering(816) 00:17:49.559 fused_ordering(817) 00:17:49.559 fused_ordering(818) 00:17:49.559 fused_ordering(819) 00:17:49.559 fused_ordering(820) 00:17:49.819 fused_ordering(821) 00:17:49.819 fused_ordering(822) 00:17:49.819 fused_ordering(823) 00:17:49.819 fused_ordering(824) 00:17:49.819 fused_ordering(825) 00:17:49.819 fused_ordering(826) 00:17:49.819 fused_ordering(827) 00:17:49.819 fused_ordering(828) 00:17:49.819 fused_ordering(829) 00:17:49.819 fused_ordering(830) 00:17:49.819 fused_ordering(831) 00:17:49.819 fused_ordering(832) 00:17:49.819 fused_ordering(833) 00:17:49.819 fused_ordering(834) 00:17:49.819 fused_ordering(835) 00:17:49.819 fused_ordering(836) 00:17:49.819 fused_ordering(837) 00:17:49.819 fused_ordering(838) 00:17:49.819 fused_ordering(839) 00:17:49.819 fused_ordering(840) 00:17:49.819 fused_ordering(841) 00:17:49.819 fused_ordering(842) 00:17:49.819 fused_ordering(843) 00:17:49.819 fused_ordering(844) 00:17:49.819 fused_ordering(845) 00:17:49.819 fused_ordering(846) 00:17:49.819 fused_ordering(847) 00:17:49.819 fused_ordering(848) 00:17:49.819 fused_ordering(849) 00:17:49.819 fused_ordering(850) 00:17:49.819 fused_ordering(851) 00:17:49.819 fused_ordering(852) 00:17:49.819 fused_ordering(853) 00:17:49.819 fused_ordering(854) 00:17:49.819 fused_ordering(855) 00:17:49.819 fused_ordering(856) 00:17:49.819 fused_ordering(857) 00:17:49.819 fused_ordering(858) 00:17:49.819 fused_ordering(859) 00:17:49.819 fused_ordering(860) 00:17:49.819 fused_ordering(861) 00:17:49.819 fused_ordering(862) 00:17:49.819 fused_ordering(863) 00:17:49.819 fused_ordering(864) 00:17:49.819 fused_ordering(865) 00:17:49.819 fused_ordering(866) 00:17:49.819 fused_ordering(867) 00:17:49.819 fused_ordering(868) 00:17:49.819 fused_ordering(869) 00:17:49.819 fused_ordering(870) 00:17:49.819 fused_ordering(871) 00:17:49.819 fused_ordering(872) 00:17:49.819 fused_ordering(873) 00:17:49.819 fused_ordering(874) 00:17:49.819 fused_ordering(875) 00:17:49.819 fused_ordering(876) 
00:17:49.819 fused_ordering(877) 00:17:49.819 fused_ordering(878) 00:17:49.819 fused_ordering(879) 00:17:49.819 fused_ordering(880) 00:17:49.819 fused_ordering(881) 00:17:49.819 fused_ordering(882) 00:17:49.819 fused_ordering(883) 00:17:49.819 fused_ordering(884) 00:17:49.819 fused_ordering(885) 00:17:49.819 fused_ordering(886) 00:17:49.819 fused_ordering(887) 00:17:49.819 fused_ordering(888) 00:17:49.819 fused_ordering(889) 00:17:49.819 fused_ordering(890) 00:17:49.819 fused_ordering(891) 00:17:49.819 fused_ordering(892) 00:17:49.819 fused_ordering(893) 00:17:49.819 fused_ordering(894) 00:17:49.819 fused_ordering(895) 00:17:49.819 fused_ordering(896) 00:17:49.819 fused_ordering(897) 00:17:49.819 fused_ordering(898) 00:17:49.819 fused_ordering(899) 00:17:49.819 fused_ordering(900) 00:17:49.819 fused_ordering(901) 00:17:49.819 fused_ordering(902) 00:17:49.819 fused_ordering(903) 00:17:49.819 fused_ordering(904) 00:17:49.819 fused_ordering(905) 00:17:49.819 fused_ordering(906) 00:17:49.819 fused_ordering(907) 00:17:49.819 fused_ordering(908) 00:17:49.819 fused_ordering(909) 00:17:49.819 fused_ordering(910) 00:17:49.819 fused_ordering(911) 00:17:49.819 fused_ordering(912) 00:17:49.819 fused_ordering(913) 00:17:49.819 fused_ordering(914) 00:17:49.819 fused_ordering(915) 00:17:49.819 fused_ordering(916) 00:17:49.819 fused_ordering(917) 00:17:49.819 fused_ordering(918) 00:17:49.819 fused_ordering(919) 00:17:49.819 fused_ordering(920) 00:17:49.819 fused_ordering(921) 00:17:49.819 fused_ordering(922) 00:17:49.819 fused_ordering(923) 00:17:49.819 fused_ordering(924) 00:17:49.819 fused_ordering(925) 00:17:49.819 fused_ordering(926) 00:17:49.819 fused_ordering(927) 00:17:49.819 fused_ordering(928) 00:17:49.819 fused_ordering(929) 00:17:49.819 fused_ordering(930) 00:17:49.819 fused_ordering(931) 00:17:49.819 fused_ordering(932) 00:17:49.819 fused_ordering(933) 00:17:49.819 fused_ordering(934) 00:17:49.819 fused_ordering(935) 00:17:49.819 fused_ordering(936) 00:17:49.819 
fused_ordering(937) 00:17:49.819 fused_ordering(938) 00:17:49.819 fused_ordering(939) 00:17:49.819 fused_ordering(940) 00:17:49.819 fused_ordering(941) 00:17:49.819 fused_ordering(942) 00:17:49.819 fused_ordering(943) 00:17:49.819 fused_ordering(944) 00:17:49.819 fused_ordering(945) 00:17:49.819 fused_ordering(946) 00:17:49.819 fused_ordering(947) 00:17:49.819 fused_ordering(948) 00:17:49.819 fused_ordering(949) 00:17:49.819 fused_ordering(950) 00:17:49.819 fused_ordering(951) 00:17:49.819 fused_ordering(952) 00:17:49.819 fused_ordering(953) 00:17:49.819 fused_ordering(954) 00:17:49.819 fused_ordering(955) 00:17:49.819 fused_ordering(956) 00:17:49.819 fused_ordering(957) 00:17:49.819 fused_ordering(958) 00:17:49.819 fused_ordering(959) 00:17:49.819 fused_ordering(960) 00:17:49.819 fused_ordering(961) 00:17:49.819 fused_ordering(962) 00:17:49.819 fused_ordering(963) 00:17:49.819 fused_ordering(964) 00:17:49.819 fused_ordering(965) 00:17:49.819 fused_ordering(966) 00:17:49.819 fused_ordering(967) 00:17:49.819 fused_ordering(968) 00:17:49.819 fused_ordering(969) 00:17:49.819 fused_ordering(970) 00:17:49.819 fused_ordering(971) 00:17:49.819 fused_ordering(972) 00:17:49.819 fused_ordering(973) 00:17:49.819 fused_ordering(974) 00:17:49.819 fused_ordering(975) 00:17:49.819 fused_ordering(976) 00:17:49.819 fused_ordering(977) 00:17:49.819 fused_ordering(978) 00:17:49.819 fused_ordering(979) 00:17:49.819 fused_ordering(980) 00:17:49.819 fused_ordering(981) 00:17:49.819 fused_ordering(982) 00:17:49.819 fused_ordering(983) 00:17:49.819 fused_ordering(984) 00:17:49.819 fused_ordering(985) 00:17:49.819 fused_ordering(986) 00:17:49.819 fused_ordering(987) 00:17:49.819 fused_ordering(988) 00:17:49.819 fused_ordering(989) 00:17:49.819 fused_ordering(990) 00:17:49.819 fused_ordering(991) 00:17:49.819 fused_ordering(992) 00:17:49.819 fused_ordering(993) 00:17:49.819 fused_ordering(994) 00:17:49.819 fused_ordering(995) 00:17:49.819 fused_ordering(996) 00:17:49.819 fused_ordering(997) 
00:17:49.819 fused_ordering(998) 00:17:49.819 fused_ordering(999) 00:17:49.819 fused_ordering(1000) 00:17:49.819 fused_ordering(1001) 00:17:49.819 fused_ordering(1002) 00:17:49.819 fused_ordering(1003) 00:17:49.819 fused_ordering(1004) 00:17:49.819 fused_ordering(1005) 00:17:49.819 fused_ordering(1006) 00:17:49.819 fused_ordering(1007) 00:17:49.819 fused_ordering(1008) 00:17:49.819 fused_ordering(1009) 00:17:49.819 fused_ordering(1010) 00:17:49.819 fused_ordering(1011) 00:17:49.819 fused_ordering(1012) 00:17:49.819 fused_ordering(1013) 00:17:49.819 fused_ordering(1014) 00:17:49.819 fused_ordering(1015) 00:17:49.819 fused_ordering(1016) 00:17:49.819 fused_ordering(1017) 00:17:49.819 fused_ordering(1018) 00:17:49.819 fused_ordering(1019) 00:17:49.819 fused_ordering(1020) 00:17:49.819 fused_ordering(1021) 00:17:49.819 fused_ordering(1022) 00:17:49.819 fused_ordering(1023) 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@516 -- # nvmfcleanup 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # sync 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set +e 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # for i in {1..20} 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:17:49.819 rmmod nvme_tcp 00:17:49.819 rmmod nvme_fabrics 00:17:49.819 rmmod nvme_keyring 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@127 -- # modprobe -v -r 
nvme-fabrics 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@128 -- # set -e 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # return 0 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@517 -- # '[' -n 284851 ']' 00:17:49.819 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@518 -- # killprocess 284851 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 284851 ']' 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 284851 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 284851 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 284851' 00:17:49.820 killing process with pid 284851 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 284851 00:17:49.820 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 284851 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # iptr 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-save 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@791 -- # iptables-restore 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # remove_spdk_ns 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:50.079 05:18:03 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:17:52.618 00:17:52.618 real 0m10.541s 00:17:52.618 user 0m5.092s 00:17:52.618 sys 0m5.404s 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:17:52.618 ************************************ 00:17:52.618 END TEST nvmf_fused_ordering 00:17:52.618 ************************************ 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:17:52.618 05:18:05 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:52.618 ************************************ 00:17:52.618 START TEST nvmf_ns_masking 00:17:52.618 ************************************ 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:17:52.618 * Looking for test storage... 00:17:52.618 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.618 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.619 05:18:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.619 --rc genhtml_branch_coverage=1 00:17:52.619 --rc genhtml_function_coverage=1 00:17:52.619 --rc genhtml_legend=1 00:17:52.619 --rc geninfo_all_blocks=1 00:17:52.619 --rc geninfo_unexecuted_blocks=1 00:17:52.619 00:17:52.619 ' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.619 --rc genhtml_branch_coverage=1 00:17:52.619 --rc genhtml_function_coverage=1 00:17:52.619 --rc genhtml_legend=1 00:17:52.619 --rc geninfo_all_blocks=1 00:17:52.619 --rc geninfo_unexecuted_blocks=1 00:17:52.619 00:17:52.619 ' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.619 --rc genhtml_branch_coverage=1 00:17:52.619 --rc genhtml_function_coverage=1 00:17:52.619 --rc genhtml_legend=1 00:17:52.619 --rc geninfo_all_blocks=1 00:17:52.619 --rc geninfo_unexecuted_blocks=1 00:17:52.619 00:17:52.619 ' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.619 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.619 --rc genhtml_branch_coverage=1 00:17:52.619 --rc 
genhtml_function_coverage=1 00:17:52.619 --rc genhtml_legend=1 00:17:52.619 --rc geninfo_all_blocks=1 00:17:52.619 --rc geninfo_unexecuted_blocks=1 00:17:52.619 00:17:52.619 ' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # : 0 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:17:52.619 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@55 -- # have_pci_nics=0 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:17:52.619 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=c957db42-e72b-4264-9f6e-371797e31dc8 00:17:52.620 05:18:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=f53c0f62-7289-40c2-8e80-d7112e409240 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=4208591b-1434-44d6-a58b-ec76a1813670 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@476 -- # prepare_net_devs 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@438 -- # local -g 
is_hw=no 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@440 -- # remove_spdk_ns 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # xtrace_disable 00:17:52.620 05:18:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # pci_devs=() 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # local -a pci_devs 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # pci_net_devs=() 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # pci_drivers=() 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # local -A pci_drivers 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # net_devs=() 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@319 -- # local -ga net_devs 00:17:59.195 05:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # e810=() 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # local -ga e810 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # x722=() 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # local -ga x722 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # mlx=() 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@322 -- # local -ga mlx 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:59.195 05:18:11 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:17:59.195 Found 0000:af:00.0 (0x8086 - 0x159b) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:17:59.195 Found 0000:af:00.1 (0x8086 - 0x159b) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: 
cvl_0_0' 00:17:59.195 Found net devices under 0000:af:00.0: cvl_0_0 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # [[ up == up ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:17:59.195 Found net devices under 0000:af:00.1: cvl_0_1 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@442 -- # is_hw=yes 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:17:59.195 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # ip netns exec 
cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:17:59.196 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:59.196 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.369 ms 00:17:59.196 00:17:59.196 --- 10.0.0.2 ping statistics --- 00:17:59.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.196 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:59.196 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:59.196 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.193 ms 00:17:59.196 00:17:59.196 --- 10.0.0.1 ping statistics --- 00:17:59.196 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:59.196 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # return 0 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@509 -- # nvmfpid=289197 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@510 -- # waitforlisten 289197 
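The bring-up sequence logged above (move one NIC port into a network namespace for the target, address both sides, open TCP port 4420, then ping-check both directions) follows a fixed order. A dry-run sketch of that sequence, using the interface and namespace names from this log, might look like the following; `run` only echoes each command, so this is a non-destructive summary rather than the test's actual helper:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the NVMe/TCP netns bring-up seen in the log.
# run() only echoes; in a real run these commands need root privileges.
run() { printf '+ %s\n' "$*"; }

TGT_IF=cvl_0_0        # target-side port (moved into the namespace)
INI_IF=cvl_0_1        # initiator-side port (stays in the root namespace)
NS=cvl_0_0_ns_spdk    # namespace the SPDK target runs in

run ip -4 addr flush "$TGT_IF"
run ip -4 addr flush "$INI_IF"
run ip netns add "$NS"
run ip link set "$TGT_IF" netns "$NS"
run ip addr add 10.0.0.1/24 dev "$INI_IF"
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
run ip link set "$INI_IF" up
run ip netns exec "$NS" ip link set "$TGT_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2
run ip netns exec "$NS" ping -c 1 10.0.0.1
```

The two pings at the end mirror the log's sanity check: the initiator side must reach 10.0.0.2 inside the namespace, and the namespaced target must reach 10.0.0.1 back in the root namespace, before `nvmf_tgt` is started with `ip netns exec`.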
00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 289197 ']' 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.196 05:18:11 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:59.196 [2024-12-15 05:18:11.941540] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:17:59.196 [2024-12-15 05:18:11.941584] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.196 [2024-12-15 05:18:12.016937] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.196 [2024-12-15 05:18:12.038512] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.196 [2024-12-15 05:18:12.038546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:59.196 [2024-12-15 05:18:12.038556] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.196 [2024-12-15 05:18:12.038562] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.196 [2024-12-15 05:18:12.038567] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:59.196 [2024-12-15 05:18:12.039053] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:17:59.196 [2024-12-15 05:18:12.330691] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 
00:17:59.196 Malloc1 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:17:59.196 Malloc2 00:17:59.196 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:17:59.455 05:18:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:17:59.714 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:59.714 [2024-12-15 05:18:13.349065] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.714 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:17:59.714 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4208591b-1434-44d6-a58b-ec76a1813670 -a 10.0.0.2 -s 4420 -i 4 00:17:59.972 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:17:59.972 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:17:59.972 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:17:59.972 05:18:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:17:59.972 05:18:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:02.509 [ 0]:0x1 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:02.509 
05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db1f9db2b4204aea9766b7418bb0f6fc 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db1f9db2b4204aea9766b7418bb0f6fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:02.509 [ 0]:0x1 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db1f9db2b4204aea9766b7418bb0f6fc 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db1f9db2b4204aea9766b7418bb0f6fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:02.509 05:18:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:02.509 [ 1]:0x2 00:18:02.509 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 
00:18:02.509 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:02.509 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca2b94712825461aa0e535bb5e9d2331 00:18:02.509 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca2b94712825461aa0e535bb5e9d2331 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:02.509 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:18:02.509 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:02.769 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:02.769 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:03.028 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:18:03.028 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:18:03.028 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4208591b-1434-44d6-a58b-ec76a1813670 -a 10.0.0.2 -s 4420 -i 4 00:18:03.287 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:18:03.287 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:03.287 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:03.287 05:18:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]] 00:18:03.287 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1 00:18:03.287 05:18:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 
00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.823 05:18:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.823 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:05.823 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- 
# ns_is_visible 0x2 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:05.824 [ 0]:0x2 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca2b94712825461aa0e535bb5e9d2331 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca2b94712825461aa0e535bb5e9d2331 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:05.824 [ 0]:0x1 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db1f9db2b4204aea9766b7418bb0f6fc 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db1f9db2b4204aea9766b7418bb0f6fc != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:05.824 [ 1]:0x2 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca2b94712825461aa0e535bb5e9d2331 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca2b94712825461aa0e535bb5e9d2331 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:05.824 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t 
ns_is_visible 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:06.083 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:06.084 [ 0]:0x2 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca2b94712825461aa0e535bb5e9d2331 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca2b94712825461aa0e535bb5e9d2331 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:06.084 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:06.084 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:06.343 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:18:06.343 05:18:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 4208591b-1434-44d6-a58b-ec76a1813670 -a 10.0.0.2 -s 4420 -i 4 00:18:06.603 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:06.603 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:18:06.603 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:06.603 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:06.603 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:06.603 05:18:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:18:08.514 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:08.774 [ 0]:0x1 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:08.774 05:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=db1f9db2b4204aea9766b7418bb0f6fc 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ db1f9db2b4204aea9766b7418bb0f6fc != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:08.774 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:09.033 [ 1]:0x2 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca2b94712825461aa0e535bb5e9d2331 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca2b94712825461aa0e535bb5e9d2331 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:09.033 
05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.033 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # 
ns_is_visible 0x2 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:09.293 [ 0]:0x2 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca2b94712825461aa0e535bb5e9d2331 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca2b94712825461aa0e535bb5e9d2331 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.293 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.294 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.294 05:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.294 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.294 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.294 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:09.294 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:09.294 05:18:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:18:09.553 [2024-12-15 05:18:22.996304] nvmf_rpc.c:1873:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:18:09.553 request: 00:18:09.553 { 00:18:09.553 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:09.553 "nsid": 2, 00:18:09.553 "host": "nqn.2016-06.io.spdk:host1", 00:18:09.553 "method": "nvmf_ns_remove_host", 00:18:09.553 "req_id": 1 00:18:09.553 } 00:18:09.553 Got JSON-RPC error response 00:18:09.553 response: 00:18:09.553 { 00:18:09.553 "code": -32602, 00:18:09.553 "message": "Invalid parameters" 00:18:09.553 } 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:09.553 05:18:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:18:09.553 [ 0]:0x2 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:18:09.553 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=ca2b94712825461aa0e535bb5e9d2331 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ ca2b94712825461aa0e535bb5e9d2331 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:09.554 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=291144 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 291144 
/var/tmp/host.sock 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 291144 ']' 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:18:09.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:09.554 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:09.554 [2024-12-15 05:18:23.215232] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:09.554 [2024-12-15 05:18:23.215286] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid291144 ] 00:18:09.813 [2024-12-15 05:18:23.289315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:09.813 [2024-12-15 05:18:23.311525] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:10.071 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:10.071 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:18:10.071 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:10.071 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:10.330 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid c957db42-e72b-4264-9f6e-371797e31dc8 00:18:10.330 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:10.330 05:18:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C957DB42E72B42649F6E371797E31DC8 -i 00:18:10.589 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid f53c0f62-7289-40c2-8e80-d7112e409240 00:18:10.589 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:10.589 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g F53C0F62728940C28E80D7112E409240 -i 00:18:10.848 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:18:10.849 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:18:11.108 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:11.108 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:18:11.367 nvme0n1 00:18:11.367 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:11.367 05:18:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:18:11.627 nvme1n2 00:18:11.627 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:18:11.627 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:18:11.627 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:18:11.627 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:11.627 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:18:11.887 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:18:11.887 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:18:11.887 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:18:11.887 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:18:12.145 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ c957db42-e72b-4264-9f6e-371797e31dc8 == \c\9\5\7\d\b\4\2\-\e\7\2\b\-\4\2\6\4\-\9\f\6\e\-\3\7\1\7\9\7\e\3\1\d\c\8 ]] 00:18:12.145 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:18:12.145 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:18:12.145 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:18:12.404 05:18:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ f53c0f62-7289-40c2-8e80-d7112e409240 == \f\5\3\c\0\f\6\2\-\7\2\8\9\-\4\0\c\2\-\8\e\8\0\-\d\7\1\1\2\e\4\0\9\2\4\0 ]] 00:18:12.404 05:18:25 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:18:12.405 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid c957db42-e72b-4264-9f6e-371797e31dc8 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C957DB42E72B42649F6E371797E31DC8 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C957DB42E72B42649F6E371797E31DC8 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.663 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:12.664 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:18:12.664 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g C957DB42E72B42649F6E371797E31DC8 00:18:12.922 [2024-12-15 05:18:26.425760] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: invalid 00:18:12.923 [2024-12-15 05:18:26.425789] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:18:12.923 [2024-12-15 05:18:26.425797] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:18:12.923 request: 00:18:12.923 { 00:18:12.923 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.923 "namespace": { 00:18:12.923 "bdev_name": "invalid", 00:18:12.923 "nsid": 1, 00:18:12.923 "nguid": "C957DB42E72B42649F6E371797E31DC8", 00:18:12.923 "no_auto_visible": false, 00:18:12.923 "hide_metadata": false 00:18:12.923 }, 00:18:12.923 "method": "nvmf_subsystem_add_ns", 00:18:12.923 "req_id": 1 00:18:12.923 } 00:18:12.923 Got JSON-RPC error response 00:18:12.923 response: 00:18:12.923 { 00:18:12.923 "code": -32602, 00:18:12.923 "message": "Invalid parameters" 00:18:12.923 } 00:18:12.923 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:18:12.923 05:18:26 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.923 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.923 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.923 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid c957db42-e72b-4264-9f6e-371797e31dc8 00:18:12.923 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@787 -- # tr -d - 00:18:12.923 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g C957DB42E72B42649F6E371797E31DC8 -i 00:18:13.182 05:18:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:18:15.087 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:18:15.087 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:18:15.087 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 291144 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 291144 ']' 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 291144 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:15.347 05:18:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 291144 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 291144' 00:18:15.347 killing process with pid 291144 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 291144 00:18:15.347 05:18:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 291144 00:18:15.606 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # sync 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set +e 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 
00:18:15.865 rmmod nvme_tcp 00:18:15.865 rmmod nvme_fabrics 00:18:15.865 rmmod nvme_keyring 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@128 -- # set -e 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # return 0 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@517 -- # '[' -n 289197 ']' 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@518 -- # killprocess 289197 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 289197 ']' 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 289197 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 289197 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 289197' 00:18:15.865 killing process with pid 289197 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 289197 00:18:15.865 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 289197 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@520 
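The `killprocess` helper that runs twice above (for pids 291144 and 289197) guards against killing the wrong process: it validates the pid argument, probes liveness with `kill -0`, checks the command name via `ps` so it never kills the `sudo` wrapper, then kills and reaps. A hedged reconstruction of that flow, pieced together from the trace rather than copied from `autotest_common.sh`:

```shell
# Sketch of the killprocess flow from the log: validate the pid, make sure
# the process exists and is not sudo, then kill it and reap it.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1               # the '[' -z ... ']' guard in the log
    kill -0 "$pid" 2>/dev/null || return 1  # process must still exist
    if [ "$(uname)" = Linux ]; then
        local name
        name=$(ps -o comm= -p "$pid")       # log uses ps --no-headers -o comm=
        [ "$name" = sudo ] && return 1      # never kill the sudo wrapper itself
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null                 # returns the child's exit status
}
```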
-- # '[' '' == iso ']' 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # iptr 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-save 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@791 -- # iptables-restore 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:16.125 05:18:29 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:18.664 00:18:18.664 real 0m26.032s 00:18:18.664 user 0m31.015s 00:18:18.664 sys 0m6.966s 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:18:18.664 ************************************ 00:18:18.664 END TEST nvmf_ns_masking 00:18:18.664 ************************************ 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:18:18.664 
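The `iptr` step above undoes every firewall rule the test added. This works because each rule is installed with an `SPDK_NVMF:` comment (visible later in this log via `-m comment --comment 'SPDK_NVMF:...'`), so teardown can round-trip the whole ruleset through `iptables-save`, drop the tagged lines, and `iptables-restore` the remainder. A dry demonstration of the filtering stage, run against sample text since the real pipeline needs root:

```shell
# Filter an iptables-save dump the way iptr does: drop every rule tagged with
# the SPDK_NVMF comment, keep everything else. The real teardown runs
# iptables-save | grep -v SPDK_NVMF | iptables-restore (root required).
strip_spdk_rules() {
    grep -v SPDK_NVMF
}

strip_spdk_rules <<'EOF'
-A INPUT -i lo -j ACCEPT
-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
EOF
# → -A INPUT -i lo -j ACCEPT
```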
05:18:31 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:18.664 ************************************ 00:18:18.664 START TEST nvmf_nvme_cli 00:18:18.664 ************************************ 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:18:18.664 * Looking for test storage... 00:18:18.664 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lcov --version 00:18:18.664 05:18:31 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:18:18.664 
05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@344 -- # case "$op" in 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:18.664 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:18.664 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.664 --rc genhtml_branch_coverage=1 00:18:18.664 --rc genhtml_function_coverage=1 00:18:18.664 --rc genhtml_legend=1 00:18:18.664 --rc geninfo_all_blocks=1 00:18:18.665 --rc geninfo_unexecuted_blocks=1 00:18:18.665 
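The `lt 1.15 2` / `cmp_versions` trace above compares the installed lcov version against a threshold: both versions are split on `.`, `-`, and `:` into arrays, then compared component by component as integers. A self-contained sketch of the same `lt` logic (the real `scripts/common.sh` routes several operators through one `cmp_versions` core):

```shell
# Dotted-version comparison in the style of scripts/common.sh: split both
# versions on '.', '-' and ':', then compare component by component as
# integers. lt A B returns 0 (true) when A is strictly less than B.
lt() {
    local -a ver1 ver2
    local v n
    IFS='.-:' read -ra ver1 <<< "$1"
    IFS='.-:' read -ra ver2 <<< "$2"
    n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # versions are equal, so not strictly less
}

lt 1.15 2 && echo "1.15 < 2"
```

Note the per-component integer compare is why `1.2 < 1.10` holds here, where a plain string compare would get it wrong.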
00:18:18.665 ' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:18.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.665 --rc genhtml_branch_coverage=1 00:18:18.665 --rc genhtml_function_coverage=1 00:18:18.665 --rc genhtml_legend=1 00:18:18.665 --rc geninfo_all_blocks=1 00:18:18.665 --rc geninfo_unexecuted_blocks=1 00:18:18.665 00:18:18.665 ' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:18.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.665 --rc genhtml_branch_coverage=1 00:18:18.665 --rc genhtml_function_coverage=1 00:18:18.665 --rc genhtml_legend=1 00:18:18.665 --rc geninfo_all_blocks=1 00:18:18.665 --rc geninfo_unexecuted_blocks=1 00:18:18.665 00:18:18.665 ' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:18.665 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:18.665 --rc genhtml_branch_coverage=1 00:18:18.665 --rc genhtml_function_coverage=1 00:18:18.665 --rc genhtml_legend=1 00:18:18.665 --rc geninfo_all_blocks=1 00:18:18.665 --rc geninfo_unexecuted_blocks=1 00:18:18.665 00:18:18.665 ' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # : 0 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:18.665 05:18:32 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:18.665 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@476 -- # prepare_net_devs 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@438 -- # local -g is_hw=no 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@440 -- # remove_spdk_ns 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # 
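The one shell error in this section, `common.sh: line 33: [: : integer expression expected`, comes from the trace just above it: `'[' '' -eq 1 ']'` applies a numeric test to an empty variable. It is harmless here because `[` simply returns nonzero, but the noise is avoidable by defaulting the value before the test. A small sketch (`maybe_flag` is a hypothetical variable name for illustration):

```shell
# The "integer expression expected" complaint comes from testing an empty
# variable with -eq, as in '[' '' -eq 1 ']'. Substituting a default keeps
# the numeric test quiet. maybe_flag is a made-up name for this example.
maybe_flag=""

# Noisy form (what common.sh line 33 does):  [ "$maybe_flag" -eq 1 ]
# Quiet form: default to 0 when unset or empty before the numeric test.
if [ "${maybe_flag:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
# → flag not set
```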
_remove_spdk_ns 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # xtrace_disable 00:18:18.665 05:18:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # pci_devs=() 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # local -a pci_devs 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # pci_net_devs=() 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # pci_drivers=() 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # local -A pci_drivers 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # net_devs=() 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@319 -- # local -ga net_devs 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # e810=() 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # local -ga e810 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # x722=() 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # local -ga x722 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # mlx=() 00:18:25.241 05:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@322 -- # local -ga mlx 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:25.241 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@353 -- # [[ 
e810 == mlx5 ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:18:25.242 Found 0000:af:00.0 (0x8086 - 0x159b) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:18:25.242 Found 0000:af:00.1 (0x8086 - 0x159b) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:25.242 05:18:37 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:18:25.242 Found net devices under 0000:af:00.0: cvl_0_0 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # [[ up == up ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:18:25.242 Found net devices under 0000:af:00.1: cvl_0_1 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@442 -- # is_hw=yes 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:25.242 05:18:37 
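The "Found net devices under 0000:af:00.x" lines above come from globbing each NIC's `net/` subdirectory in sysfs and stripping the paths down to interface names (`common.sh@411` and `@427` in the trace). The sketch below reproduces that pattern against a throwaway directory tree standing in for `/sys`, so it runs unprivileged:

```shell
# How the log finds net devices for each PCI NIC: glob the device's net/
# subdirectory in sysfs, then strip each path down to the interface name.
# A temp directory stands in for /sys/bus/pci/devices here.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:af:00.0/net/cvl_0_0" "$sysfs/0000:af:00.1/net/cvl_0_1"

for pci in "$sysfs"/*; do
    pci_net_devs=("$pci/net/"*)               # glob, as in common.sh@411
    pci_net_devs=("${pci_net_devs[@]##*/}")   # basename, as in common.sh@427
    echo "Found net devices under ${pci##*/}: ${pci_net_devs[*]}"
done
# → Found net devices under 0000:af:00.0: cvl_0_0
# → Found net devices under 0000:af:00.1: cvl_0_1

rm -rf "$sysfs"
```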
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@790 -- 
# iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:18:25.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:25.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:18:25.242 00:18:25.242 --- 10.0.0.2 ping statistics --- 00:18:25.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.242 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:25.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:25.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:18:25.242 00:18:25.242 --- 10.0.0.1 ping statistics --- 00:18:25.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:25.242 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # return 0 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:18:25.242 05:18:37 
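The `nvmf_tcp_init` sequence above follows a fixed recipe: move the target-side interface into a fresh network namespace, address both ends from 10.0.0.0/24, bring the links (and the namespace's loopback) up, open TCP/4420 with a tagged iptables rule, and ping in both directions to verify. Because every step needs root and the real `cvl_0_*` interfaces, the sketch below only prints the recipe, with names taken from the log:

```shell
# Dry run of the nvmf_tcp_init recipe from the log. Commands are echoed
# rather than executed: each one needs root and the cvl_0_* interfaces.
run() { echo "+ $*"; }    # drop the echo wrapper to actually execute

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                 # target side into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_1             # initiator address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # initiator -> target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # target -> initiator
```

Splitting target and initiator across a namespace boundary is what lets a single host exercise a real TCP path between them, which the two ping transcripts above confirm.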
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:25.242 05:18:37 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@509 -- # nvmfpid=295637 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@510 -- # waitforlisten 295637 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 295637 ']' 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.242 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:25.242 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 [2024-12-15 05:18:38.051908] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
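For reference, the interface plumbing traced above (nvmf/common.sh@265–291) condenses to a short sequence. The sketch below is a dry run: `run` prints each command instead of executing it, since the real steps need root and the physical `cvl_*` NICs; all names, addresses, and the port are taken from the trace.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace plumbing performed by nvmf/common.sh above.
# 'run' prints each command rather than executing it (the real steps require
# root and the physical cvl_* interfaces).
run() { printf '%s\n' "$*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                      # target-side network namespace
run ip link set cvl_0_0 netns "$NS"         # move the target NIC into it
run ip addr add 10.0.0.1/24 dev cvl_0_1     # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT  # open NVMe/TCP port
run ping -c 1 10.0.0.2                      # root ns -> namespace (verified above)
run ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace -> root ns
```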
00:18:25.243 [2024-12-15 05:18:38.051956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.243 [2024-12-15 05:18:38.131454] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:25.243 [2024-12-15 05:18:38.156119] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:25.243 [2024-12-15 05:18:38.156169] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:25.243 [2024-12-15 05:18:38.156178] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:25.243 [2024-12-15 05:18:38.156186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:25.243 [2024-12-15 05:18:38.156192] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:25.243 [2024-12-15 05:18:38.157515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:25.243 [2024-12-15 05:18:38.157625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:25.243 [2024-12-15 05:18:38.157730] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:25.243 [2024-12-15 05:18:38.157732] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 [2024-12-15 05:18:38.298257] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 Malloc0 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 Malloc1 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 [2024-12-15 05:18:38.396548] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:18:25.243 00:18:25.243 Discovery Log Number of Records 2, Generation counter 2 00:18:25.243 =====Discovery Log Entry 0====== 00:18:25.243 trtype: tcp 00:18:25.243 adrfam: ipv4 00:18:25.243 subtype: current discovery subsystem 00:18:25.243 treq: not required 00:18:25.243 portid: 0 00:18:25.243 trsvcid: 4420 
00:18:25.243 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:18:25.243 traddr: 10.0.0.2 00:18:25.243 eflags: explicit discovery connections, duplicate discovery information 00:18:25.243 sectype: none 00:18:25.243 =====Discovery Log Entry 1====== 00:18:25.243 trtype: tcp 00:18:25.243 adrfam: ipv4 00:18:25.243 subtype: nvme subsystem 00:18:25.243 treq: not required 00:18:25.243 portid: 0 00:18:25.243 trsvcid: 4420 00:18:25.243 subnqn: nqn.2016-06.io.spdk:cnode1 00:18:25.243 traddr: 10.0.0.2 00:18:25.243 eflags: none 00:18:25.243 sectype: none 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:18:25.243 05:18:38 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:18:26.182 05:18:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:18:26.182 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:18:26.182 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:18:26.182 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:18:26.182 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:18:26.182 05:18:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:28.721 
05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ --------------------- == /dev/nvme* ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:18:28.721 /dev/nvme0n2 ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@550 -- # local dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@549 -- # nvme list 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ Node == /dev/nvme* ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ 
--------------------- == /dev/nvme* ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n1 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@553 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@554 -- # echo /dev/nvme0n2 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@552 -- # read -r dev _ 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:18:28.721 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@516 -- # nvmfcleanup 00:18:28.721 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # sync 00:18:28.722 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:18:28.722 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set +e 00:18:28.722 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # for i in {1..20} 00:18:28.722 05:18:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:18:28.722 rmmod nvme_tcp 00:18:28.722 rmmod nvme_fabrics 00:18:28.722 rmmod nvme_keyring 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@128 -- # set -e 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # return 0 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@517 -- # '[' -n 295637 ']' 
00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@518 -- # killprocess 295637 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 295637 ']' 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 295637 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 295637 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 295637' 00:18:28.722 killing process with pid 295637 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 295637 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 295637 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # iptr 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-save 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 
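The teardown traced above (host disconnect, subsystem delete, target kill, firewall and namespace cleanup) can likewise be summarized as a dry-run sketch. Nothing is executed; names and the pid come from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the teardown traced above; 'run' prints instead of
# executing (the real commands need root and a live target).
run() { printf '%s\n' "$*"; }

run nvme disconnect -n nqn.2016-06.io.spdk:cnode1    # drop the host connection
run rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
run kill 295637                                      # nvmfpid from the trace
# iptr: re-save the firewall without the SPDK_NVMF-tagged rules
run 'iptables-save | grep -v SPDK_NVMF | iptables-restore'
run ip netns delete cvl_0_0_ns_spdk                  # remove_spdk_ns
run ip -4 addr flush cvl_0_1                         # clear initiator-side address
```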
00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@791 -- # iptables-restore 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # remove_spdk_ns 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:28.722 05:18:42 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:31.266 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:18:31.266 00:18:31.266 real 0m12.499s 00:18:31.266 user 0m18.137s 00:18:31.266 sys 0m5.056s 00:18:31.266 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:18:31.267 ************************************ 00:18:31.267 END TEST nvmf_nvme_cli 00:18:31.267 ************************************ 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:31.267 ************************************ 00:18:31.267 START TEST 
nvmf_vfio_user 00:18:31.267 ************************************ 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:18:31.267 * Looking for test storage... 00:18:31.267 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lcov --version 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:18:31.267 05:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:18:31.267 05:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:18:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.267 --rc genhtml_branch_coverage=1 00:18:31.267 --rc genhtml_function_coverage=1 00:18:31.267 --rc genhtml_legend=1 00:18:31.267 --rc geninfo_all_blocks=1 00:18:31.267 --rc geninfo_unexecuted_blocks=1 00:18:31.267 00:18:31.267 ' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:18:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.267 --rc genhtml_branch_coverage=1 00:18:31.267 --rc genhtml_function_coverage=1 00:18:31.267 --rc genhtml_legend=1 00:18:31.267 --rc geninfo_all_blocks=1 00:18:31.267 --rc geninfo_unexecuted_blocks=1 00:18:31.267 00:18:31.267 ' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:18:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.267 --rc genhtml_branch_coverage=1 00:18:31.267 --rc genhtml_function_coverage=1 00:18:31.267 --rc genhtml_legend=1 00:18:31.267 --rc geninfo_all_blocks=1 00:18:31.267 --rc geninfo_unexecuted_blocks=1 00:18:31.267 00:18:31.267 ' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:18:31.267 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:31.267 --rc genhtml_branch_coverage=1 00:18:31.267 --rc genhtml_function_coverage=1 00:18:31.267 --rc genhtml_legend=1 00:18:31.267 --rc geninfo_all_blocks=1 00:18:31.267 --rc geninfo_unexecuted_blocks=1 00:18:31.267 00:18:31.267 ' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:31.267 
05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # : 0 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:18:31.267 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:18:31.268 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@55 -- # have_pci_nics=0 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:18:31.268 05:18:44 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=296813 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 296813' 00:18:31.268 Process pid: 296813 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 296813 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 
296813 ']' 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:18:31.268 [2024-12-15 05:18:44.706597] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:31.268 [2024-12-15 05:18:44.706642] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.268 [2024-12-15 05:18:44.779578] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:31.268 [2024-12-15 05:18:44.802857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.268 [2024-12-15 05:18:44.802893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.268 [2024-12-15 05:18:44.802900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.268 [2024-12-15 05:18:44.802909] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:31.268 [2024-12-15 05:18:44.802914] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:31.268 [2024-12-15 05:18:44.804263] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:18:31.268 [2024-12-15 05:18:44.804291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:18:31.268 [2024-12-15 05:18:44.804398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.268 [2024-12-15 05:18:44.804399] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:18:31.268 05:18:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:18:32.213 05:18:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:18:32.471 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:18:32.471 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:18:32.471 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:32.471 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:18:32.471 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:18:32.730 Malloc1 00:18:32.730 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:18:32.989 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:18:33.248 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:18:33.508 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:33.508 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:18:33.508 05:18:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:18:33.508 Malloc2 00:18:33.508 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:18:33.767 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:18:34.026 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:18:34.287 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:18:34.287 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:18:34.287 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:18:34.287 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:34.287 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:18:34.287 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:34.287 [2024-12-15 05:18:47.769280] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:18:34.287 [2024-12-15 05:18:47.769312] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid297388 ] 00:18:34.287 [2024-12-15 05:18:47.812533] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:18:34.287 [2024-12-15 05:18:47.820317] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:34.287 [2024-12-15 05:18:47.820338] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f93c9e86000 00:18:34.287 [2024-12-15 05:18:47.821313] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.822308] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.823318] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: 
*DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.824323] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.825330] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.826334] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.827339] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.828349] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:34.287 [2024-12-15 05:18:47.829359] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:34.287 [2024-12-15 05:18:47.829370] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f93c8b8f000 00:18:34.287 [2024-12-15 05:18:47.830278] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:34.287 [2024-12-15 05:18:47.840654] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:18:34.287 [2024-12-15 05:18:47.840680] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:18:34.287 [2024-12-15 05:18:47.845459] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:34.287 [2024-12-15 05:18:47.845496] 
nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:34.287 [2024-12-15 05:18:47.845568] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:18:34.287 [2024-12-15 05:18:47.845584] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:18:34.287 [2024-12-15 05:18:47.845590] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:18:34.287 [2024-12-15 05:18:47.846461] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:18:34.287 [2024-12-15 05:18:47.846470] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:18:34.287 [2024-12-15 05:18:47.846476] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:18:34.287 [2024-12-15 05:18:47.847464] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:18:34.287 [2024-12-15 05:18:47.847472] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:18:34.287 [2024-12-15 05:18:47.847479] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:34.287 [2024-12-15 05:18:47.848468] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:18:34.287 [2024-12-15 05:18:47.848478] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:34.287 [2024-12-15 05:18:47.849479] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:18:34.287 [2024-12-15 05:18:47.849487] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:34.287 [2024-12-15 05:18:47.849492] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:34.287 [2024-12-15 05:18:47.849498] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:34.287 [2024-12-15 05:18:47.849606] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:18:34.287 [2024-12-15 05:18:47.849610] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:34.287 [2024-12-15 05:18:47.849615] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:18:34.287 [2024-12-15 05:18:47.850484] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:18:34.288 [2024-12-15 05:18:47.851487] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:18:34.288 [2024-12-15 05:18:47.852499] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:34.288 [2024-12-15 05:18:47.853500] 
vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:34.288 [2024-12-15 05:18:47.853561] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:34.288 [2024-12-15 05:18:47.854510] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:18:34.288 [2024-12-15 05:18:47.854518] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:34.288 [2024-12-15 05:18:47.854523] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854540] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:18:34.288 [2024-12-15 05:18:47.854551] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854562] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:34.288 [2024-12-15 05:18:47.854568] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:34.288 [2024-12-15 05:18:47.854571] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:34.288 [2024-12-15 05:18:47.854584] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.854626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:0001 p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.854634] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:18:34.288 [2024-12-15 05:18:47.854639] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:18:34.288 [2024-12-15 05:18:47.854643] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:18:34.288 [2024-12-15 05:18:47.854647] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:34.288 [2024-12-15 05:18:47.854651] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:18:34.288 [2024-12-15 05:18:47.854655] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:18:34.288 [2024-12-15 05:18:47.854660] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854668] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854679] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.854690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.854700] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.288 [2024-12-15 05:18:47.854708] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.288 [2024-12-15 05:18:47.854715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.288 [2024-12-15 05:18:47.854722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:34.288 [2024-12-15 05:18:47.854726] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854733] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854742] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.854750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.854755] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:18:34.288 [2024-12-15 05:18:47.854760] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854768] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854774] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 
30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854782] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.854796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.854843] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854852] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854859] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:34.288 [2024-12-15 05:18:47.854863] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:34.288 [2024-12-15 05:18:47.854866] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:34.288 [2024-12-15 05:18:47.854872] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.854883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.854891] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:18:34.288 [2024-12-15 05:18:47.854902] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854909] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to 
wait for identify ns (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854915] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:34.288 [2024-12-15 05:18:47.854919] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:34.288 [2024-12-15 05:18:47.854922] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:34.288 [2024-12-15 05:18:47.854927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.854950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.854962] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.854975] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:34.288 [2024-12-15 05:18:47.854979] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:34.288 [2024-12-15 05:18:47.854982] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:34.288 [2024-12-15 05:18:47.854987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.855007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 
00:18:34.288 [2024-12-15 05:18:47.855016] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.855022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.855030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.855035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior support feature (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.855040] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.855045] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.855049] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:34.288 [2024-12-15 05:18:47.855053] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:18:34.288 [2024-12-15 05:18:47.855058] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:18:34.288 [2024-12-15 05:18:47.855075] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.855083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.855094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.855104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.855114] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.855124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.855134] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:34.288 [2024-12-15 05:18:47.855140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:34.288 [2024-12-15 05:18:47.855152] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:34.289 [2024-12-15 05:18:47.855156] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:34.289 [2024-12-15 05:18:47.855159] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:34.289 [2024-12-15 05:18:47.855162] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:34.289 [2024-12-15 05:18:47.855165] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:34.289 [2024-12-15 05:18:47.855171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:34.289 [2024-12-15 05:18:47.855177] 
nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:34.289 [2024-12-15 05:18:47.855181] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:34.289 [2024-12-15 05:18:47.855184] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:34.289 [2024-12-15 05:18:47.855190] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:34.289 [2024-12-15 05:18:47.855196] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:34.289 [2024-12-15 05:18:47.855200] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:34.289 [2024-12-15 05:18:47.855203] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:34.289 [2024-12-15 05:18:47.855208] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:34.289 [2024-12-15 05:18:47.855214] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:34.289 [2024-12-15 05:18:47.855218] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:34.289 [2024-12-15 05:18:47.855221] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:34.289 [2024-12-15 05:18:47.855226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:34.289 [2024-12-15 05:18:47.855232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:34.289 [2024-12-15 
05:18:47.855244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:34.289 [2024-12-15 05:18:47.855253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:34.289 [2024-12-15 05:18:47.855260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:34.289 ===================================================== 00:18:34.289 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:34.289 ===================================================== 00:18:34.289 Controller Capabilities/Features 00:18:34.289 ================================ 00:18:34.289 Vendor ID: 4e58 00:18:34.289 Subsystem Vendor ID: 4e58 00:18:34.289 Serial Number: SPDK1 00:18:34.289 Model Number: SPDK bdev Controller 00:18:34.289 Firmware Version: 25.01 00:18:34.289 Recommended Arb Burst: 6 00:18:34.289 IEEE OUI Identifier: 8d 6b 50 00:18:34.289 Multi-path I/O 00:18:34.289 May have multiple subsystem ports: Yes 00:18:34.289 May have multiple controllers: Yes 00:18:34.289 Associated with SR-IOV VF: No 00:18:34.289 Max Data Transfer Size: 131072 00:18:34.289 Max Number of Namespaces: 32 00:18:34.289 Max Number of I/O Queues: 127 00:18:34.289 NVMe Specification Version (VS): 1.3 00:18:34.289 NVMe Specification Version (Identify): 1.3 00:18:34.289 Maximum Queue Entries: 256 00:18:34.289 Contiguous Queues Required: Yes 00:18:34.289 Arbitration Mechanisms Supported 00:18:34.289 Weighted Round Robin: Not Supported 00:18:34.289 Vendor Specific: Not Supported 00:18:34.289 Reset Timeout: 15000 ms 00:18:34.289 Doorbell Stride: 4 bytes 00:18:34.289 NVM Subsystem Reset: Not Supported 00:18:34.289 Command Sets Supported 00:18:34.289 NVM Command Set: Supported 00:18:34.289 Boot Partition: Not Supported 00:18:34.289 Memory Page Size Minimum: 4096 bytes 00:18:34.289 
Memory Page Size Maximum: 4096 bytes 00:18:34.289 Persistent Memory Region: Not Supported 00:18:34.289 Optional Asynchronous Events Supported 00:18:34.289 Namespace Attribute Notices: Supported 00:18:34.289 Firmware Activation Notices: Not Supported 00:18:34.289 ANA Change Notices: Not Supported 00:18:34.289 PLE Aggregate Log Change Notices: Not Supported 00:18:34.289 LBA Status Info Alert Notices: Not Supported 00:18:34.289 EGE Aggregate Log Change Notices: Not Supported 00:18:34.289 Normal NVM Subsystem Shutdown event: Not Supported 00:18:34.289 Zone Descriptor Change Notices: Not Supported 00:18:34.289 Discovery Log Change Notices: Not Supported 00:18:34.289 Controller Attributes 00:18:34.289 128-bit Host Identifier: Supported 00:18:34.289 Non-Operational Permissive Mode: Not Supported 00:18:34.289 NVM Sets: Not Supported 00:18:34.289 Read Recovery Levels: Not Supported 00:18:34.289 Endurance Groups: Not Supported 00:18:34.289 Predictable Latency Mode: Not Supported 00:18:34.289 Traffic Based Keep ALive: Not Supported 00:18:34.289 Namespace Granularity: Not Supported 00:18:34.289 SQ Associations: Not Supported 00:18:34.289 UUID List: Not Supported 00:18:34.289 Multi-Domain Subsystem: Not Supported 00:18:34.289 Fixed Capacity Management: Not Supported 00:18:34.289 Variable Capacity Management: Not Supported 00:18:34.289 Delete Endurance Group: Not Supported 00:18:34.289 Delete NVM Set: Not Supported 00:18:34.289 Extended LBA Formats Supported: Not Supported 00:18:34.289 Flexible Data Placement Supported: Not Supported 00:18:34.289 00:18:34.289 Controller Memory Buffer Support 00:18:34.289 ================================ 00:18:34.289 Supported: No 00:18:34.289 00:18:34.289 Persistent Memory Region Support 00:18:34.289 ================================ 00:18:34.289 Supported: No 00:18:34.289 00:18:34.289 Admin Command Set Attributes 00:18:34.289 ============================ 00:18:34.289 Security Send/Receive: Not Supported 00:18:34.289 Format NVM: Not Supported 
00:18:34.289 Firmware Activate/Download: Not Supported 00:18:34.289 Namespace Management: Not Supported 00:18:34.289 Device Self-Test: Not Supported 00:18:34.289 Directives: Not Supported 00:18:34.289 NVMe-MI: Not Supported 00:18:34.289 Virtualization Management: Not Supported 00:18:34.289 Doorbell Buffer Config: Not Supported 00:18:34.289 Get LBA Status Capability: Not Supported 00:18:34.289 Command & Feature Lockdown Capability: Not Supported 00:18:34.289 Abort Command Limit: 4 00:18:34.289 Async Event Request Limit: 4 00:18:34.289 Number of Firmware Slots: N/A 00:18:34.289 Firmware Slot 1 Read-Only: N/A 00:18:34.289 Firmware Activation Without Reset: N/A 00:18:34.289 Multiple Update Detection Support: N/A 00:18:34.289 Firmware Update Granularity: No Information Provided 00:18:34.289 Per-Namespace SMART Log: No 00:18:34.289 Asymmetric Namespace Access Log Page: Not Supported 00:18:34.289 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:18:34.289 Command Effects Log Page: Supported 00:18:34.289 Get Log Page Extended Data: Supported 00:18:34.289 Telemetry Log Pages: Not Supported 00:18:34.289 Persistent Event Log Pages: Not Supported 00:18:34.289 Supported Log Pages Log Page: May Support 00:18:34.289 Commands Supported & Effects Log Page: Not Supported 00:18:34.289 Feature Identifiers & Effects Log Page:May Support 00:18:34.289 NVMe-MI Commands & Effects Log Page: May Support 00:18:34.289 Data Area 4 for Telemetry Log: Not Supported 00:18:34.289 Error Log Page Entries Supported: 128 00:18:34.289 Keep Alive: Supported 00:18:34.289 Keep Alive Granularity: 10000 ms 00:18:34.289 00:18:34.289 NVM Command Set Attributes 00:18:34.289 ========================== 00:18:34.289 Submission Queue Entry Size 00:18:34.289 Max: 64 00:18:34.289 Min: 64 00:18:34.289 Completion Queue Entry Size 00:18:34.289 Max: 16 00:18:34.289 Min: 16 00:18:34.289 Number of Namespaces: 32 00:18:34.289 Compare Command: Supported 00:18:34.289 Write Uncorrectable Command: Not Supported 00:18:34.289 Dataset 
Management Command: Supported 00:18:34.289 Write Zeroes Command: Supported 00:18:34.289 Set Features Save Field: Not Supported 00:18:34.289 Reservations: Not Supported 00:18:34.289 Timestamp: Not Supported 00:18:34.289 Copy: Supported 00:18:34.289 Volatile Write Cache: Present 00:18:34.289 Atomic Write Unit (Normal): 1 00:18:34.289 Atomic Write Unit (PFail): 1 00:18:34.289 Atomic Compare & Write Unit: 1 00:18:34.289 Fused Compare & Write: Supported 00:18:34.289 Scatter-Gather List 00:18:34.289 SGL Command Set: Supported (Dword aligned) 00:18:34.289 SGL Keyed: Not Supported 00:18:34.289 SGL Bit Bucket Descriptor: Not Supported 00:18:34.289 SGL Metadata Pointer: Not Supported 00:18:34.289 Oversized SGL: Not Supported 00:18:34.289 SGL Metadata Address: Not Supported 00:18:34.289 SGL Offset: Not Supported 00:18:34.289 Transport SGL Data Block: Not Supported 00:18:34.289 Replay Protected Memory Block: Not Supported 00:18:34.289 00:18:34.289 Firmware Slot Information 00:18:34.289 ========================= 00:18:34.289 Active slot: 1 00:18:34.289 Slot 1 Firmware Revision: 25.01 00:18:34.289 00:18:34.289 00:18:34.289 Commands Supported and Effects 00:18:34.289 ============================== 00:18:34.289 Admin Commands 00:18:34.289 -------------- 00:18:34.289 Get Log Page (02h): Supported 00:18:34.289 Identify (06h): Supported 00:18:34.289 Abort (08h): Supported 00:18:34.289 Set Features (09h): Supported 00:18:34.289 Get Features (0Ah): Supported 00:18:34.289 Asynchronous Event Request (0Ch): Supported 00:18:34.290 Keep Alive (18h): Supported 00:18:34.290 I/O Commands 00:18:34.290 ------------ 00:18:34.290 Flush (00h): Supported LBA-Change 00:18:34.290 Write (01h): Supported LBA-Change 00:18:34.290 Read (02h): Supported 00:18:34.290 Compare (05h): Supported 00:18:34.290 Write Zeroes (08h): Supported LBA-Change 00:18:34.290 Dataset Management (09h): Supported LBA-Change 00:18:34.290 Copy (19h): Supported LBA-Change 00:18:34.290 00:18:34.290 Error Log 00:18:34.290 ========= 
00:18:34.290 00:18:34.290 Arbitration 00:18:34.290 =========== 00:18:34.290 Arbitration Burst: 1 00:18:34.290 00:18:34.290 Power Management 00:18:34.290 ================ 00:18:34.290 Number of Power States: 1 00:18:34.290 Current Power State: Power State #0 00:18:34.290 Power State #0: 00:18:34.290 Max Power: 0.00 W 00:18:34.290 Non-Operational State: Operational 00:18:34.290 Entry Latency: Not Reported 00:18:34.290 Exit Latency: Not Reported 00:18:34.290 Relative Read Throughput: 0 00:18:34.290 Relative Read Latency: 0 00:18:34.290 Relative Write Throughput: 0 00:18:34.290 Relative Write Latency: 0 00:18:34.290 Idle Power: Not Reported 00:18:34.290 Active Power: Not Reported 00:18:34.290 Non-Operational Permissive Mode: Not Supported 00:18:34.290 00:18:34.290 Health Information 00:18:34.290 ================== 00:18:34.290 Critical Warnings: 00:18:34.290 Available Spare Space: OK 00:18:34.290 Temperature: OK 00:18:34.290 Device Reliability: OK 00:18:34.290 Read Only: No 00:18:34.290 Volatile Memory Backup: OK 00:18:34.290 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:34.290 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:34.290 Available Spare: 0% 00:18:34.290 Available Sp[2024-12-15 05:18:47.855343] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:34.290 [2024-12-15 05:18:47.855354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:34.290 [2024-12-15 05:18:47.855378] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:18:34.290 [2024-12-15 05:18:47.855386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.290 [2024-12-15 05:18:47.855392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.290 [2024-12-15 05:18:47.855397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.290 [2024-12-15 05:18:47.855402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:34.290 [2024-12-15 05:18:47.859000] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:18:34.290 [2024-12-15 05:18:47.859013] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:18:34.290 [2024-12-15 05:18:47.859533] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:34.290 [2024-12-15 05:18:47.859580] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:18:34.290 [2024-12-15 05:18:47.859585] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:18:34.290 [2024-12-15 05:18:47.860533] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:18:34.290 [2024-12-15 05:18:47.860544] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:18:34.290 [2024-12-15 05:18:47.860594] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:18:34.290 [2024-12-15 05:18:47.861553] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:34.290 are Threshold: 0% 00:18:34.290 Life Percentage Used: 0% 00:18:34.290 Data Units Read: 0 00:18:34.290 Data 
Units Written: 0 00:18:34.290 Host Read Commands: 0 00:18:34.290 Host Write Commands: 0 00:18:34.290 Controller Busy Time: 0 minutes 00:18:34.290 Power Cycles: 0 00:18:34.290 Power On Hours: 0 hours 00:18:34.290 Unsafe Shutdowns: 0 00:18:34.290 Unrecoverable Media Errors: 0 00:18:34.290 Lifetime Error Log Entries: 0 00:18:34.290 Warning Temperature Time: 0 minutes 00:18:34.290 Critical Temperature Time: 0 minutes 00:18:34.290 00:18:34.290 Number of Queues 00:18:34.290 ================ 00:18:34.290 Number of I/O Submission Queues: 127 00:18:34.290 Number of I/O Completion Queues: 127 00:18:34.290 00:18:34.290 Active Namespaces 00:18:34.290 ================= 00:18:34.290 Namespace ID:1 00:18:34.290 Error Recovery Timeout: Unlimited 00:18:34.290 Command Set Identifier: NVM (00h) 00:18:34.290 Deallocate: Supported 00:18:34.290 Deallocated/Unwritten Error: Not Supported 00:18:34.290 Deallocated Read Value: Unknown 00:18:34.290 Deallocate in Write Zeroes: Not Supported 00:18:34.290 Deallocated Guard Field: 0xFFFF 00:18:34.290 Flush: Supported 00:18:34.290 Reservation: Supported 00:18:34.290 Namespace Sharing Capabilities: Multiple Controllers 00:18:34.290 Size (in LBAs): 131072 (0GiB) 00:18:34.290 Capacity (in LBAs): 131072 (0GiB) 00:18:34.290 Utilization (in LBAs): 131072 (0GiB) 00:18:34.290 NGUID: F67EC4CD5C1947369078541F62C69FD5 00:18:34.290 UUID: f67ec4cd-5c19-4736-9078-541f62c69fd5 00:18:34.290 Thin Provisioning: Not Supported 00:18:34.290 Per-NS Atomic Units: Yes 00:18:34.290 Atomic Boundary Size (Normal): 0 00:18:34.290 Atomic Boundary Size (PFail): 0 00:18:34.290 Atomic Boundary Offset: 0 00:18:34.290 Maximum Single Source Range Length: 65535 00:18:34.290 Maximum Copy Length: 65535 00:18:34.290 Maximum Source Range Count: 1 00:18:34.290 NGUID/EUI64 Never Reused: No 00:18:34.290 Namespace Write Protected: No 00:18:34.290 Number of LBA Formats: 1 00:18:34.290 Current LBA Format: LBA Format #00 00:18:34.290 LBA Format #00: Data Size: 512 Metadata Size: 0 
00:18:34.290 00:18:34.290 05:18:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:34.549 [2024-12-15 05:18:48.096603] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:39.823 Initializing NVMe Controllers 00:18:39.823 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:39.823 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:39.823 Initialization complete. Launching workers. 00:18:39.823 ======================================================== 00:18:39.823 Latency(us) 00:18:39.823 Device Information : IOPS MiB/s Average min max 00:18:39.823 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39908.33 155.89 3206.96 958.36 6674.11 00:18:39.823 ======================================================== 00:18:39.823 Total : 39908.33 155.89 3206.96 958.36 6674.11 00:18:39.823 00:18:39.823 [2024-12-15 05:18:53.114180] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:39.823 05:18:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:18:39.823 [2024-12-15 05:18:53.348319] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:45.099 Initializing NVMe Controllers 00:18:45.099 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 
00:18:45.099 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:18:45.099 Initialization complete. Launching workers. 00:18:45.099 ======================================================== 00:18:45.099 Latency(us) 00:18:45.099 Device Information : IOPS MiB/s Average min max 00:18:45.099 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15905.60 62.13 8055.94 5598.41 15963.87 00:18:45.099 ======================================================== 00:18:45.099 Total : 15905.60 62.13 8055.94 5598.41 15963.87 00:18:45.099 00:18:45.099 [2024-12-15 05:18:58.383910] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:45.099 05:18:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:18:45.099 [2024-12-15 05:18:58.594874] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:50.375 [2024-12-15 05:19:03.667266] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:50.375 Initializing NVMe Controllers 00:18:50.375 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:50.375 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:18:50.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:18:50.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:18:50.375 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:18:50.375 Initialization complete. Launching workers. 
00:18:50.375 Starting thread on core 2 00:18:50.375 Starting thread on core 3 00:18:50.375 Starting thread on core 1 00:18:50.375 05:19:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:18:50.375 [2024-12-15 05:19:03.966437] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:53.662 [2024-12-15 05:19:07.034637] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:53.662 Initializing NVMe Controllers 00:18:53.662 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:53.662 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:53.662 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:18:53.662 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:18:53.662 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:18:53.662 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:18:53.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:18:53.662 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:18:53.662 Initialization complete. Launching workers. 
00:18:53.662 Starting thread on core 1 with urgent priority queue 00:18:53.662 Starting thread on core 2 with urgent priority queue 00:18:53.662 Starting thread on core 3 with urgent priority queue 00:18:53.662 Starting thread on core 0 with urgent priority queue 00:18:53.662 SPDK bdev Controller (SPDK1 ) core 0: 8753.00 IO/s 11.42 secs/100000 ios 00:18:53.662 SPDK bdev Controller (SPDK1 ) core 1: 9950.00 IO/s 10.05 secs/100000 ios 00:18:53.662 SPDK bdev Controller (SPDK1 ) core 2: 8138.67 IO/s 12.29 secs/100000 ios 00:18:53.662 SPDK bdev Controller (SPDK1 ) core 3: 8319.33 IO/s 12.02 secs/100000 ios 00:18:53.662 ======================================================== 00:18:53.662 00:18:53.662 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:53.662 [2024-12-15 05:19:07.321469] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:53.921 Initializing NVMe Controllers 00:18:53.921 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:53.921 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:53.921 Namespace ID: 1 size: 0GB 00:18:53.921 Initialization complete. 00:18:53.921 INFO: using host memory buffer for IO 00:18:53.921 Hello world! 
00:18:53.921 [2024-12-15 05:19:07.355718] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:53.921 05:19:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:18:54.180 [2024-12-15 05:19:07.633391] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:55.119 Initializing NVMe Controllers 00:18:55.119 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.119 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.119 Initialization complete. Launching workers. 00:18:55.119 submit (in ns) avg, min, max = 6825.2, 3117.1, 7989830.5 00:18:55.119 complete (in ns) avg, min, max = 20123.7, 1710.5, 3999854.3 00:18:55.119 00:18:55.119 Submit histogram 00:18:55.119 ================ 00:18:55.119 Range in us Cumulative Count 00:18:55.119 3.109 - 3.124: 0.0062% ( 1) 00:18:55.119 3.124 - 3.139: 0.0433% ( 6) 00:18:55.119 3.139 - 3.154: 0.0743% ( 5) 00:18:55.119 3.154 - 3.170: 0.1362% ( 10) 00:18:55.119 3.170 - 3.185: 0.2353% ( 16) 00:18:55.119 3.185 - 3.200: 1.0773% ( 136) 00:18:55.119 3.200 - 3.215: 4.2908% ( 519) 00:18:55.119 3.215 - 3.230: 9.6527% ( 866) 00:18:55.119 3.230 - 3.246: 15.9371% ( 1015) 00:18:55.119 3.246 - 3.261: 23.1317% ( 1162) 00:18:55.119 3.261 - 3.276: 31.2798% ( 1316) 00:18:55.119 3.276 - 3.291: 37.5642% ( 1015) 00:18:55.119 3.291 - 3.307: 43.0933% ( 893) 00:18:55.119 3.307 - 3.322: 47.7060% ( 745) 00:18:55.119 3.322 - 3.337: 51.9163% ( 680) 00:18:55.119 3.337 - 3.352: 55.6003% ( 595) 00:18:55.119 3.352 - 3.368: 61.4946% ( 952) 00:18:55.119 3.368 - 3.383: 68.7078% ( 1165) 00:18:55.119 3.383 - 3.398: 74.0945% ( 870) 00:18:55.119 3.398 - 3.413: 79.7907% ( 920) 00:18:55.119 3.413 - 3.429: 83.2518% ( 559) 
00:18:55.119 3.429 - 3.444: 85.6046% ( 380) 00:18:55.119 3.444 - 3.459: 86.7129% ( 179) 00:18:55.119 3.459 - 3.474: 87.2949% ( 94) 00:18:55.119 3.474 - 3.490: 87.7840% ( 79) 00:18:55.119 3.490 - 3.505: 88.2608% ( 77) 00:18:55.119 3.505 - 3.520: 89.0038% ( 120) 00:18:55.119 3.520 - 3.535: 89.9697% ( 156) 00:18:55.119 3.535 - 3.550: 91.0284% ( 171) 00:18:55.119 3.550 - 3.566: 92.0500% ( 165) 00:18:55.119 3.566 - 3.581: 92.8673% ( 132) 00:18:55.119 3.581 - 3.596: 93.6103% ( 120) 00:18:55.119 3.596 - 3.611: 94.4090% ( 129) 00:18:55.119 3.611 - 3.627: 95.3254% ( 148) 00:18:55.119 3.627 - 3.642: 96.2170% ( 144) 00:18:55.119 3.642 - 3.657: 96.9476% ( 118) 00:18:55.119 3.657 - 3.672: 97.6286% ( 110) 00:18:55.119 3.672 - 3.688: 98.1735% ( 88) 00:18:55.119 3.688 - 3.703: 98.5759% ( 65) 00:18:55.119 3.703 - 3.718: 98.9041% ( 53) 00:18:55.119 3.718 - 3.733: 99.1518% ( 40) 00:18:55.119 3.733 - 3.749: 99.3189% ( 27) 00:18:55.119 3.749 - 3.764: 99.4675% ( 24) 00:18:55.119 3.764 - 3.779: 99.5480% ( 13) 00:18:55.119 3.779 - 3.794: 99.5728% ( 4) 00:18:55.119 3.794 - 3.810: 99.5914% ( 3) 00:18:55.119 3.810 - 3.825: 99.6037% ( 2) 00:18:55.119 3.825 - 3.840: 99.6223% ( 3) 00:18:55.119 3.855 - 3.870: 99.6347% ( 2) 00:18:55.119 5.181 - 5.211: 99.6409% ( 1) 00:18:55.119 5.242 - 5.272: 99.6471% ( 1) 00:18:55.119 5.303 - 5.333: 99.6533% ( 1) 00:18:55.119 5.425 - 5.455: 99.6595% ( 1) 00:18:55.119 5.638 - 5.669: 99.6657% ( 1) 00:18:55.119 5.669 - 5.699: 99.6718% ( 1) 00:18:55.119 5.699 - 5.730: 99.6780% ( 1) 00:18:55.119 5.730 - 5.760: 99.6904% ( 2) 00:18:55.119 5.790 - 5.821: 99.6966% ( 1) 00:18:55.119 5.912 - 5.943: 99.7028% ( 1) 00:18:55.119 5.943 - 5.973: 99.7090% ( 1) 00:18:55.119 6.034 - 6.065: 99.7214% ( 2) 00:18:55.119 6.126 - 6.156: 99.7276% ( 1) 00:18:55.119 6.156 - 6.187: 99.7338% ( 1) 00:18:55.119 6.217 - 6.248: 99.7461% ( 2) 00:18:55.119 6.278 - 6.309: 99.7523% ( 1) 00:18:55.119 6.400 - 6.430: 99.7585% ( 1) 00:18:55.119 6.461 - 6.491: 99.7647% ( 1) 00:18:55.119 6.491 - 6.522: 
99.7709% ( 1) 00:18:55.119 6.583 - 6.613: 99.7833% ( 2) 00:18:55.119 6.613 - 6.644: 99.7895% ( 1) 00:18:55.119 6.674 - 6.705: 99.7957% ( 1) 00:18:55.119 6.705 - 6.735: 99.8019% ( 1) 00:18:55.119 6.735 - 6.766: 99.8081% ( 1) 00:18:55.119 7.010 - 7.040: 99.8143% ( 1) 00:18:55.119 7.070 - 7.101: 99.8204% ( 1) 00:18:55.119 7.162 - 7.192: 99.8266% ( 1) 00:18:55.119 7.497 - 7.528: 99.8328% ( 1) 00:18:55.119 7.558 - 7.589: 99.8390% ( 1) 00:18:55.119 7.680 - 7.710: 99.8452% ( 1) 00:18:55.119 [2024-12-15 05:19:08.649330] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:55.119 7.741 - 7.771: 99.8514% ( 1) 00:18:55.119 7.924 - 7.985: 99.8576% ( 1) 00:18:55.119 8.290 - 8.350: 99.8638% ( 1) 00:18:55.119 8.411 - 8.472: 99.8700% ( 1) 00:18:55.119 8.472 - 8.533: 99.8824% ( 2) 00:18:55.119 8.533 - 8.594: 99.8886% ( 1) 00:18:55.119 10.118 - 10.179: 99.8947% ( 1) 00:18:55.119 10.971 - 11.032: 99.9009% ( 1) 00:18:55.119 12.373 - 12.434: 99.9071% ( 1) 00:18:55.119 13.836 - 13.897: 99.9133% ( 1) 00:18:55.119 23.528 - 23.650: 99.9195% ( 1) 00:18:55.119 3994.575 - 4025.783: 99.9938% ( 12) 00:18:55.119 7989.150 - 8051.566: 100.0000% ( 1) 00:18:55.119 00:18:55.119 Complete histogram 00:18:55.119 ================== 00:18:55.119 Range in us Cumulative Count 00:18:55.119 1.707 - 1.714: 0.0062% ( 1) 00:18:55.119 1.714 - 1.722: 0.0371% ( 5) 00:18:55.119 1.722 - 1.730: 0.1238% ( 14) 00:18:55.119 1.730 - 1.737: 0.2477% ( 20) 00:18:55.119 1.737 - 1.745: 0.3343% ( 14) 00:18:55.119 1.745 - 1.752: 0.3963% ( 10) 00:18:55.119 1.752 - 1.760: 0.4148% ( 3) 00:18:55.119 1.760 - 1.768: 0.6687% ( 41) 00:18:55.119 1.768 - 1.775: 3.3930% ( 440) 00:18:55.119 1.775 - 1.783: 14.8164% ( 1845) 00:18:55.119 1.783 - 1.790: 31.8866% ( 2757) 00:18:55.119 1.790 - 1.798: 43.9725% ( 1952) 00:18:55.119 1.798 - 1.806: 49.2849% ( 858) 00:18:55.119 1.806 - 1.813: 51.4086% ( 343) 00:18:55.119 1.813 - 1.821: 52.9131% ( 243) 00:18:55.119 1.821 - 1.829: 55.6684% ( 445) 
00:18:55.119 1.829 - 1.836: 64.3675% ( 1405) 00:18:55.119 1.836 - 1.844: 78.0942% ( 2217) 00:18:55.119 1.844 - 1.851: 88.4280% ( 1669) 00:18:55.119 1.851 - 1.859: 93.4679% ( 814) 00:18:55.119 1.859 - 1.867: 95.6535% ( 353) 00:18:55.119 1.867 - 1.874: 96.8113% ( 187) 00:18:55.119 1.874 - 1.882: 97.3686% ( 90) 00:18:55.119 1.882 - 1.890: 97.7029% ( 54) 00:18:55.119 1.890 - 1.897: 97.9073% ( 33) 00:18:55.119 1.897 - 1.905: 98.2416% ( 54) 00:18:55.119 1.905 - 1.912: 98.5883% ( 56) 00:18:55.119 1.912 - 1.920: 98.7865% ( 32) 00:18:55.119 1.920 - 1.928: 99.0217% ( 38) 00:18:55.119 1.928 - 1.935: 99.0960% ( 12) 00:18:55.119 1.935 - 1.943: 99.2013% ( 17) 00:18:55.119 1.943 - 1.950: 99.2570% ( 9) 00:18:55.119 1.950 - 1.966: 99.3251% ( 11) 00:18:55.119 1.966 - 1.981: 99.3499% ( 4) 00:18:55.119 1.996 - 2.011: 99.3561% ( 1) 00:18:55.119 2.103 - 2.118: 99.3685% ( 2) 00:18:55.119 3.535 - 3.550: 99.3747% ( 1) 00:18:55.119 3.992 - 4.023: 99.3808% ( 1) 00:18:55.119 4.053 - 4.084: 99.3870% ( 1) 00:18:55.119 4.267 - 4.297: 99.3932% ( 1) 00:18:55.119 4.450 - 4.480: 99.3994% ( 1) 00:18:55.119 4.510 - 4.541: 99.4056% ( 1) 00:18:55.119 4.571 - 4.602: 99.4118% ( 1) 00:18:55.119 4.846 - 4.876: 99.4242% ( 2) 00:18:55.119 4.907 - 4.937: 99.4304% ( 1) 00:18:55.119 4.968 - 4.998: 99.4366% ( 1) 00:18:55.119 5.029 - 5.059: 99.4428% ( 1) 00:18:55.119 5.059 - 5.090: 99.4490% ( 1) 00:18:55.119 5.090 - 5.120: 99.4613% ( 2) 00:18:55.119 5.120 - 5.150: 99.4675% ( 1) 00:18:55.119 5.272 - 5.303: 99.4737% ( 1) 00:18:55.120 5.364 - 5.394: 99.4799% ( 1) 00:18:55.120 5.425 - 5.455: 99.4861% ( 1) 00:18:55.120 5.547 - 5.577: 99.4923% ( 1) 00:18:55.120 5.608 - 5.638: 99.4985% ( 1) 00:18:55.120 5.699 - 5.730: 99.5047% ( 1) 00:18:55.120 5.882 - 5.912: 99.5109% ( 1) 00:18:55.120 5.973 - 6.004: 99.5171% ( 1) 00:18:55.120 6.370 - 6.400: 99.5232% ( 1) 00:18:55.120 6.857 - 6.888: 99.5294% ( 1) 00:18:55.120 9.082 - 9.143: 99.5356% ( 1) 00:18:55.120 17.432 - 17.554: 99.5418% ( 1) 00:18:55.120 3994.575 - 4025.783: 
100.0000% ( 74) 00:18:55.120 00:18:55.120 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:18:55.120 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:18:55.120 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:18:55.120 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:18:55.120 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:55.396 [ 00:18:55.396 { 00:18:55.396 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:55.396 "subtype": "Discovery", 00:18:55.396 "listen_addresses": [], 00:18:55.396 "allow_any_host": true, 00:18:55.396 "hosts": [] 00:18:55.396 }, 00:18:55.396 { 00:18:55.396 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:55.396 "subtype": "NVMe", 00:18:55.396 "listen_addresses": [ 00:18:55.396 { 00:18:55.396 "trtype": "VFIOUSER", 00:18:55.396 "adrfam": "IPv4", 00:18:55.396 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:55.396 "trsvcid": "0" 00:18:55.396 } 00:18:55.396 ], 00:18:55.396 "allow_any_host": true, 00:18:55.396 "hosts": [], 00:18:55.396 "serial_number": "SPDK1", 00:18:55.396 "model_number": "SPDK bdev Controller", 00:18:55.396 "max_namespaces": 32, 00:18:55.396 "min_cntlid": 1, 00:18:55.396 "max_cntlid": 65519, 00:18:55.396 "namespaces": [ 00:18:55.396 { 00:18:55.396 "nsid": 1, 00:18:55.396 "bdev_name": "Malloc1", 00:18:55.396 "name": "Malloc1", 00:18:55.396 "nguid": "F67EC4CD5C1947369078541F62C69FD5", 00:18:55.396 "uuid": "f67ec4cd-5c19-4736-9078-541f62c69fd5" 00:18:55.396 } 00:18:55.396 ] 00:18:55.396 }, 00:18:55.396 { 00:18:55.396 "nqn": 
"nqn.2019-07.io.spdk:cnode2", 00:18:55.396 "subtype": "NVMe", 00:18:55.396 "listen_addresses": [ 00:18:55.396 { 00:18:55.396 "trtype": "VFIOUSER", 00:18:55.396 "adrfam": "IPv4", 00:18:55.396 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:55.396 "trsvcid": "0" 00:18:55.396 } 00:18:55.396 ], 00:18:55.396 "allow_any_host": true, 00:18:55.396 "hosts": [], 00:18:55.396 "serial_number": "SPDK2", 00:18:55.396 "model_number": "SPDK bdev Controller", 00:18:55.396 "max_namespaces": 32, 00:18:55.396 "min_cntlid": 1, 00:18:55.396 "max_cntlid": 65519, 00:18:55.396 "namespaces": [ 00:18:55.396 { 00:18:55.396 "nsid": 1, 00:18:55.396 "bdev_name": "Malloc2", 00:18:55.396 "name": "Malloc2", 00:18:55.396 "nguid": "966E53FCDD75482FB1D25625CA935124", 00:18:55.396 "uuid": "966e53fc-dd75-482f-b1d2-5625ca935124" 00:18:55.396 } 00:18:55.396 ] 00:18:55.396 } 00:18:55.396 ] 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=300850 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:18:55.396 05:19:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:55.396 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:55.396 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:18:55.396 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:18:55.396 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:18:55.396 [2024-12-15 05:19:09.037513] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:18:55.656 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:18:55.656 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:18:55.656 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:18:55.656 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:18:55.656 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:18:55.656 Malloc3 00:18:55.656 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:18:55.915 [2024-12-15 05:19:09.503965] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:18:55.915 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:18:55.915 Asynchronous Event Request test 00:18:55.915 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.915 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:18:55.915 Registering asynchronous event callbacks... 00:18:55.915 Starting namespace attribute notice tests for all controllers... 00:18:55.915 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:18:55.915 aer_cb - Changed Namespace 00:18:55.915 Cleaning up... 
00:18:56.175 [ 00:18:56.175 { 00:18:56.175 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:18:56.175 "subtype": "Discovery", 00:18:56.175 "listen_addresses": [], 00:18:56.175 "allow_any_host": true, 00:18:56.175 "hosts": [] 00:18:56.175 }, 00:18:56.175 { 00:18:56.175 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:18:56.175 "subtype": "NVMe", 00:18:56.175 "listen_addresses": [ 00:18:56.175 { 00:18:56.175 "trtype": "VFIOUSER", 00:18:56.175 "adrfam": "IPv4", 00:18:56.175 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:18:56.175 "trsvcid": "0" 00:18:56.175 } 00:18:56.175 ], 00:18:56.175 "allow_any_host": true, 00:18:56.175 "hosts": [], 00:18:56.175 "serial_number": "SPDK1", 00:18:56.175 "model_number": "SPDK bdev Controller", 00:18:56.175 "max_namespaces": 32, 00:18:56.175 "min_cntlid": 1, 00:18:56.175 "max_cntlid": 65519, 00:18:56.175 "namespaces": [ 00:18:56.175 { 00:18:56.175 "nsid": 1, 00:18:56.175 "bdev_name": "Malloc1", 00:18:56.175 "name": "Malloc1", 00:18:56.175 "nguid": "F67EC4CD5C1947369078541F62C69FD5", 00:18:56.175 "uuid": "f67ec4cd-5c19-4736-9078-541f62c69fd5" 00:18:56.175 }, 00:18:56.175 { 00:18:56.175 "nsid": 2, 00:18:56.175 "bdev_name": "Malloc3", 00:18:56.175 "name": "Malloc3", 00:18:56.175 "nguid": "E47169EF5E874C60AB2A5B254F318184", 00:18:56.175 "uuid": "e47169ef-5e87-4c60-ab2a-5b254f318184" 00:18:56.175 } 00:18:56.175 ] 00:18:56.175 }, 00:18:56.175 { 00:18:56.175 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:18:56.175 "subtype": "NVMe", 00:18:56.175 "listen_addresses": [ 00:18:56.175 { 00:18:56.175 "trtype": "VFIOUSER", 00:18:56.175 "adrfam": "IPv4", 00:18:56.175 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:18:56.175 "trsvcid": "0" 00:18:56.175 } 00:18:56.175 ], 00:18:56.175 "allow_any_host": true, 00:18:56.175 "hosts": [], 00:18:56.175 "serial_number": "SPDK2", 00:18:56.175 "model_number": "SPDK bdev Controller", 00:18:56.175 "max_namespaces": 32, 00:18:56.175 "min_cntlid": 1, 00:18:56.175 "max_cntlid": 65519, 00:18:56.175 "namespaces": [ 
00:18:56.175 { 00:18:56.175 "nsid": 1, 00:18:56.175 "bdev_name": "Malloc2", 00:18:56.175 "name": "Malloc2", 00:18:56.175 "nguid": "966E53FCDD75482FB1D25625CA935124", 00:18:56.175 "uuid": "966e53fc-dd75-482f-b1d2-5625ca935124" 00:18:56.175 } 00:18:56.175 ] 00:18:56.175 } 00:18:56.175 ] 00:18:56.175 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 300850 00:18:56.175 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:18:56.175 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:18:56.175 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:18:56.175 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:18:56.175 [2024-12-15 05:19:09.749398] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:18:56.175 [2024-12-15 05:19:09.749433] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid300925 ] 00:18:56.175 [2024-12-15 05:19:09.787188] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:18:56.175 [2024-12-15 05:19:09.796236] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:56.175 [2024-12-15 05:19:09.796259] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f67d4bca000 00:18:56.175 [2024-12-15 05:19:09.797237] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:56.175 [2024-12-15 05:19:09.798240] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:56.175 [2024-12-15 05:19:09.799242] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:56.175 [2024-12-15 05:19:09.800248] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:56.175 [2024-12-15 05:19:09.801260] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:56.175 [2024-12-15 05:19:09.802271] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:56.175 [2024-12-15 05:19:09.803282] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:18:56.175 
[2024-12-15 05:19:09.804289] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:18:56.175 [2024-12-15 05:19:09.805299] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:18:56.175 [2024-12-15 05:19:09.805311] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f67d38d3000 00:18:56.175 [2024-12-15 05:19:09.806222] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:56.175 [2024-12-15 05:19:09.820191] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:18:56.175 [2024-12-15 05:19:09.820216] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:18:56.175 [2024-12-15 05:19:09.822291] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:56.175 [2024-12-15 05:19:09.822332] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:18:56.175 [2024-12-15 05:19:09.822409] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:18:56.175 [2024-12-15 05:19:09.822424] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:18:56.175 [2024-12-15 05:19:09.822429] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:18:56.175 [2024-12-15 05:19:09.823288] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:18:56.175 [2024-12-15 05:19:09.823298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:18:56.175 [2024-12-15 05:19:09.823305] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:18:56.175 [2024-12-15 05:19:09.824297] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:18:56.175 [2024-12-15 05:19:09.824306] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:18:56.175 [2024-12-15 05:19:09.824313] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:18:56.175 [2024-12-15 05:19:09.825301] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:18:56.175 [2024-12-15 05:19:09.825311] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:18:56.175 [2024-12-15 05:19:09.826309] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:18:56.175 [2024-12-15 05:19:09.826318] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:18:56.175 [2024-12-15 05:19:09.826322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:18:56.175 [2024-12-15 05:19:09.826329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:18:56.176 [2024-12-15 05:19:09.826437] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:18:56.176 [2024-12-15 05:19:09.826442] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:18:56.176 [2024-12-15 05:19:09.826447] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:18:56.176 [2024-12-15 05:19:09.827321] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:18:56.176 [2024-12-15 05:19:09.828324] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:18:56.176 [2024-12-15 05:19:09.829334] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:56.176 [2024-12-15 05:19:09.830339] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:18:56.176 [2024-12-15 05:19:09.830380] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:18:56.176 [2024-12-15 05:19:09.831351] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:18:56.176 [2024-12-15 05:19:09.831361] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:18:56.176 [2024-12-15 05:19:09.831368] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.831385] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:18:56.176 [2024-12-15 05:19:09.831393] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.831403] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:56.176 [2024-12-15 05:19:09.831408] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:56.176 [2024-12-15 05:19:09.831411] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:56.176 [2024-12-15 05:19:09.831423] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:56.176 [2024-12-15 05:19:09.838001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:18:56.176 [2024-12-15 05:19:09.838013] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:18:56.176 [2024-12-15 05:19:09.838018] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:18:56.176 [2024-12-15 05:19:09.838022] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:18:56.176 [2024-12-15 05:19:09.838026] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:18:56.176 [2024-12-15 05:19:09.838030] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:18:56.176 [2024-12-15 05:19:09.838035] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:18:56.176 [2024-12-15 05:19:09.838039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.838048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.838060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:18:56.176 [2024-12-15 05:19:09.846001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:18:56.176 [2024-12-15 05:19:09.846016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.176 [2024-12-15 05:19:09.846023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.176 [2024-12-15 05:19:09.846031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.176 [2024-12-15 05:19:09.846038] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:18:56.176 [2024-12-15 05:19:09.846043] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.846052] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.846061] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:18:56.176 [2024-12-15 05:19:09.854001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:18:56.176 [2024-12-15 05:19:09.854009] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:18:56.176 [2024-12-15 05:19:09.854014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.854020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.854026] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.854034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:56.176 [2024-12-15 05:19:09.859998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:18:56.176 [2024-12-15 05:19:09.860050] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:18:56.176 [2024-12-15 05:19:09.860060] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:18:56.176 
[2024-12-15 05:19:09.860068] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:18:56.176 [2024-12-15 05:19:09.860072] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:18:56.176 [2024-12-15 05:19:09.860075] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:56.176 [2024-12-15 05:19:09.860081] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.865000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.865011] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:18:56.437 [2024-12-15 05:19:09.865022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.865030] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.865036] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:56.437 [2024-12-15 05:19:09.865040] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:56.437 [2024-12-15 05:19:09.865044] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:56.437 [2024-12-15 05:19:09.865051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.873000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.873014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.873022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.873028] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:18:56.437 [2024-12-15 05:19:09.873032] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:56.437 [2024-12-15 05:19:09.873037] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:56.437 [2024-12-15 05:19:09.873043] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.880997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.881008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.881014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.881022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.881028] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.881033] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.881037] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.881042] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:18:56.437 [2024-12-15 05:19:09.881046] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:18:56.437 [2024-12-15 05:19:09.881051] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:18:56.437 [2024-12-15 05:19:09.881066] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.889000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.889013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.897000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.897013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.905000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 
05:19:09.905013] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.912998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.913014] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:18:56.437 [2024-12-15 05:19:09.913019] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:18:56.437 [2024-12-15 05:19:09.913022] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:18:56.437 [2024-12-15 05:19:09.913025] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:18:56.437 [2024-12-15 05:19:09.913028] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:18:56.437 [2024-12-15 05:19:09.913034] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:18:56.437 [2024-12-15 05:19:09.913043] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:18:56.437 [2024-12-15 05:19:09.913047] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:18:56.437 [2024-12-15 05:19:09.913050] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:56.437 [2024-12-15 05:19:09.913056] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.913062] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:18:56.437 [2024-12-15 05:19:09.913065] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:18:56.437 [2024-12-15 05:19:09.913069] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:56.437 [2024-12-15 05:19:09.913074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.913081] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:18:56.437 [2024-12-15 05:19:09.913085] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:18:56.437 [2024-12-15 05:19:09.913088] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:18:56.437 [2024-12-15 05:19:09.913093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:18:56.437 [2024-12-15 05:19:09.920999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.921013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.921023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:18:56.437 [2024-12-15 05:19:09.921029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:18:56.437 ===================================================== 00:18:56.437 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:18:56.437 ===================================================== 00:18:56.437 Controller Capabilities/Features 00:18:56.437 
================================ 00:18:56.437 Vendor ID: 4e58 00:18:56.437 Subsystem Vendor ID: 4e58 00:18:56.437 Serial Number: SPDK2 00:18:56.437 Model Number: SPDK bdev Controller 00:18:56.437 Firmware Version: 25.01 00:18:56.437 Recommended Arb Burst: 6 00:18:56.437 IEEE OUI Identifier: 8d 6b 50 00:18:56.437 Multi-path I/O 00:18:56.437 May have multiple subsystem ports: Yes 00:18:56.437 May have multiple controllers: Yes 00:18:56.437 Associated with SR-IOV VF: No 00:18:56.437 Max Data Transfer Size: 131072 00:18:56.437 Max Number of Namespaces: 32 00:18:56.437 Max Number of I/O Queues: 127 00:18:56.437 NVMe Specification Version (VS): 1.3 00:18:56.437 NVMe Specification Version (Identify): 1.3 00:18:56.437 Maximum Queue Entries: 256 00:18:56.437 Contiguous Queues Required: Yes 00:18:56.437 Arbitration Mechanisms Supported 00:18:56.437 Weighted Round Robin: Not Supported 00:18:56.437 Vendor Specific: Not Supported 00:18:56.438 Reset Timeout: 15000 ms 00:18:56.438 Doorbell Stride: 4 bytes 00:18:56.438 NVM Subsystem Reset: Not Supported 00:18:56.438 Command Sets Supported 00:18:56.438 NVM Command Set: Supported 00:18:56.438 Boot Partition: Not Supported 00:18:56.438 Memory Page Size Minimum: 4096 bytes 00:18:56.438 Memory Page Size Maximum: 4096 bytes 00:18:56.438 Persistent Memory Region: Not Supported 00:18:56.438 Optional Asynchronous Events Supported 00:18:56.438 Namespace Attribute Notices: Supported 00:18:56.438 Firmware Activation Notices: Not Supported 00:18:56.438 ANA Change Notices: Not Supported 00:18:56.438 PLE Aggregate Log Change Notices: Not Supported 00:18:56.438 LBA Status Info Alert Notices: Not Supported 00:18:56.438 EGE Aggregate Log Change Notices: Not Supported 00:18:56.438 Normal NVM Subsystem Shutdown event: Not Supported 00:18:56.438 Zone Descriptor Change Notices: Not Supported 00:18:56.438 Discovery Log Change Notices: Not Supported 00:18:56.438 Controller Attributes 00:18:56.438 128-bit Host Identifier: Supported 00:18:56.438 
Non-Operational Permissive Mode: Not Supported 00:18:56.438 NVM Sets: Not Supported 00:18:56.438 Read Recovery Levels: Not Supported 00:18:56.438 Endurance Groups: Not Supported 00:18:56.438 Predictable Latency Mode: Not Supported 00:18:56.438 Traffic Based Keep ALive: Not Supported 00:18:56.438 Namespace Granularity: Not Supported 00:18:56.438 SQ Associations: Not Supported 00:18:56.438 UUID List: Not Supported 00:18:56.438 Multi-Domain Subsystem: Not Supported 00:18:56.438 Fixed Capacity Management: Not Supported 00:18:56.438 Variable Capacity Management: Not Supported 00:18:56.438 Delete Endurance Group: Not Supported 00:18:56.438 Delete NVM Set: Not Supported 00:18:56.438 Extended LBA Formats Supported: Not Supported 00:18:56.438 Flexible Data Placement Supported: Not Supported 00:18:56.438 00:18:56.438 Controller Memory Buffer Support 00:18:56.438 ================================ 00:18:56.438 Supported: No 00:18:56.438 00:18:56.438 Persistent Memory Region Support 00:18:56.438 ================================ 00:18:56.438 Supported: No 00:18:56.438 00:18:56.438 Admin Command Set Attributes 00:18:56.438 ============================ 00:18:56.438 Security Send/Receive: Not Supported 00:18:56.438 Format NVM: Not Supported 00:18:56.438 Firmware Activate/Download: Not Supported 00:18:56.438 Namespace Management: Not Supported 00:18:56.438 Device Self-Test: Not Supported 00:18:56.438 Directives: Not Supported 00:18:56.438 NVMe-MI: Not Supported 00:18:56.438 Virtualization Management: Not Supported 00:18:56.438 Doorbell Buffer Config: Not Supported 00:18:56.438 Get LBA Status Capability: Not Supported 00:18:56.438 Command & Feature Lockdown Capability: Not Supported 00:18:56.438 Abort Command Limit: 4 00:18:56.438 Async Event Request Limit: 4 00:18:56.438 Number of Firmware Slots: N/A 00:18:56.438 Firmware Slot 1 Read-Only: N/A 00:18:56.438 Firmware Activation Without Reset: N/A 00:18:56.438 Multiple Update Detection Support: N/A 00:18:56.438 Firmware Update 
Granularity: No Information Provided 00:18:56.438 Per-Namespace SMART Log: No 00:18:56.438 Asymmetric Namespace Access Log Page: Not Supported 00:18:56.438 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:18:56.438 Command Effects Log Page: Supported 00:18:56.438 Get Log Page Extended Data: Supported 00:18:56.438 Telemetry Log Pages: Not Supported 00:18:56.438 Persistent Event Log Pages: Not Supported 00:18:56.438 Supported Log Pages Log Page: May Support 00:18:56.438 Commands Supported & Effects Log Page: Not Supported 00:18:56.438 Feature Identifiers & Effects Log Page:May Support 00:18:56.438 NVMe-MI Commands & Effects Log Page: May Support 00:18:56.438 Data Area 4 for Telemetry Log: Not Supported 00:18:56.438 Error Log Page Entries Supported: 128 00:18:56.438 Keep Alive: Supported 00:18:56.438 Keep Alive Granularity: 10000 ms 00:18:56.438 00:18:56.438 NVM Command Set Attributes 00:18:56.438 ========================== 00:18:56.438 Submission Queue Entry Size 00:18:56.438 Max: 64 00:18:56.438 Min: 64 00:18:56.438 Completion Queue Entry Size 00:18:56.438 Max: 16 00:18:56.438 Min: 16 00:18:56.438 Number of Namespaces: 32 00:18:56.438 Compare Command: Supported 00:18:56.438 Write Uncorrectable Command: Not Supported 00:18:56.438 Dataset Management Command: Supported 00:18:56.438 Write Zeroes Command: Supported 00:18:56.438 Set Features Save Field: Not Supported 00:18:56.438 Reservations: Not Supported 00:18:56.438 Timestamp: Not Supported 00:18:56.438 Copy: Supported 00:18:56.438 Volatile Write Cache: Present 00:18:56.438 Atomic Write Unit (Normal): 1 00:18:56.438 Atomic Write Unit (PFail): 1 00:18:56.438 Atomic Compare & Write Unit: 1 00:18:56.438 Fused Compare & Write: Supported 00:18:56.438 Scatter-Gather List 00:18:56.438 SGL Command Set: Supported (Dword aligned) 00:18:56.438 SGL Keyed: Not Supported 00:18:56.438 SGL Bit Bucket Descriptor: Not Supported 00:18:56.438 SGL Metadata Pointer: Not Supported 00:18:56.438 Oversized SGL: Not Supported 00:18:56.438 SGL 
Metadata Address: Not Supported 00:18:56.438 SGL Offset: Not Supported 00:18:56.438 Transport SGL Data Block: Not Supported 00:18:56.438 Replay Protected Memory Block: Not Supported 00:18:56.438 00:18:56.438 Firmware Slot Information 00:18:56.438 ========================= 00:18:56.438 Active slot: 1 00:18:56.438 Slot 1 Firmware Revision: 25.01 00:18:56.438 00:18:56.438 00:18:56.438 Commands Supported and Effects 00:18:56.438 ============================== 00:18:56.438 Admin Commands 00:18:56.438 -------------- 00:18:56.438 Get Log Page (02h): Supported 00:18:56.438 Identify (06h): Supported 00:18:56.438 Abort (08h): Supported 00:18:56.438 Set Features (09h): Supported 00:18:56.438 Get Features (0Ah): Supported 00:18:56.438 Asynchronous Event Request (0Ch): Supported 00:18:56.438 Keep Alive (18h): Supported 00:18:56.438 I/O Commands 00:18:56.438 ------------ 00:18:56.438 Flush (00h): Supported LBA-Change 00:18:56.438 Write (01h): Supported LBA-Change 00:18:56.438 Read (02h): Supported 00:18:56.438 Compare (05h): Supported 00:18:56.438 Write Zeroes (08h): Supported LBA-Change 00:18:56.438 Dataset Management (09h): Supported LBA-Change 00:18:56.438 Copy (19h): Supported LBA-Change 00:18:56.438 00:18:56.438 Error Log 00:18:56.438 ========= 00:18:56.438 00:18:56.438 Arbitration 00:18:56.438 =========== 00:18:56.438 Arbitration Burst: 1 00:18:56.438 00:18:56.438 Power Management 00:18:56.438 ================ 00:18:56.438 Number of Power States: 1 00:18:56.438 Current Power State: Power State #0 00:18:56.438 Power State #0: 00:18:56.438 Max Power: 0.00 W 00:18:56.438 Non-Operational State: Operational 00:18:56.438 Entry Latency: Not Reported 00:18:56.438 Exit Latency: Not Reported 00:18:56.438 Relative Read Throughput: 0 00:18:56.438 Relative Read Latency: 0 00:18:56.438 Relative Write Throughput: 0 00:18:56.438 Relative Write Latency: 0 00:18:56.438 Idle Power: Not Reported 00:18:56.438 Active Power: Not Reported 00:18:56.438 Non-Operational Permissive Mode: Not 
Supported 00:18:56.438 00:18:56.438 Health Information 00:18:56.438 ================== 00:18:56.438 Critical Warnings: 00:18:56.438 Available Spare Space: OK 00:18:56.438 Temperature: OK 00:18:56.438 Device Reliability: OK 00:18:56.438 Read Only: No 00:18:56.438 Volatile Memory Backup: OK 00:18:56.438 Current Temperature: 0 Kelvin (-273 Celsius) 00:18:56.438 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:18:56.438 Available Spare: 0% 00:18:56.438 Available Sp[2024-12-15 05:19:09.921115] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:18:56.438 [2024-12-15 05:19:09.928999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:18:56.438 [2024-12-15 05:19:09.929029] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:18:56.438 [2024-12-15 05:19:09.929037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.438 [2024-12-15 05:19:09.929043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.438 [2024-12-15 05:19:09.929049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.438 [2024-12-15 05:19:09.929054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:18:56.438 [2024-12-15 05:19:09.929123] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:18:56.438 [2024-12-15 05:19:09.929133] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:18:56.438 
[2024-12-15 05:19:09.930124] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:18:56.438 [2024-12-15 05:19:09.930168] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:18:56.438 [2024-12-15 05:19:09.930174] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:18:56.438 [2024-12-15 05:19:09.931125] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:18:56.438 [2024-12-15 05:19:09.931136] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:18:56.439 [2024-12-15 05:19:09.931186] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:18:56.439 [2024-12-15 05:19:09.932138] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:18:56.439 are Threshold: 0% 00:18:56.439 Life Percentage Used: 0% 00:18:56.439 Data Units Read: 0 00:18:56.439 Data Units Written: 0 00:18:56.439 Host Read Commands: 0 00:18:56.439 Host Write Commands: 0 00:18:56.439 Controller Busy Time: 0 minutes 00:18:56.439 Power Cycles: 0 00:18:56.439 Power On Hours: 0 hours 00:18:56.439 Unsafe Shutdowns: 0 00:18:56.439 Unrecoverable Media Errors: 0 00:18:56.439 Lifetime Error Log Entries: 0 00:18:56.439 Warning Temperature Time: 0 minutes 00:18:56.439 Critical Temperature Time: 0 minutes 00:18:56.439 00:18:56.439 Number of Queues 00:18:56.439 ================ 00:18:56.439 Number of I/O Submission Queues: 127 00:18:56.439 Number of I/O Completion Queues: 127 00:18:56.439 00:18:56.439 Active Namespaces 00:18:56.439 ================= 00:18:56.439 Namespace ID:1 00:18:56.439 Error Recovery Timeout: Unlimited 
00:18:56.439 Command Set Identifier: NVM (00h) 00:18:56.439 Deallocate: Supported 00:18:56.439 Deallocated/Unwritten Error: Not Supported 00:18:56.439 Deallocated Read Value: Unknown 00:18:56.439 Deallocate in Write Zeroes: Not Supported 00:18:56.439 Deallocated Guard Field: 0xFFFF 00:18:56.439 Flush: Supported 00:18:56.439 Reservation: Supported 00:18:56.439 Namespace Sharing Capabilities: Multiple Controllers 00:18:56.439 Size (in LBAs): 131072 (0GiB) 00:18:56.439 Capacity (in LBAs): 131072 (0GiB) 00:18:56.439 Utilization (in LBAs): 131072 (0GiB) 00:18:56.439 NGUID: 966E53FCDD75482FB1D25625CA935124 00:18:56.439 UUID: 966e53fc-dd75-482f-b1d2-5625ca935124 00:18:56.439 Thin Provisioning: Not Supported 00:18:56.439 Per-NS Atomic Units: Yes 00:18:56.439 Atomic Boundary Size (Normal): 0 00:18:56.439 Atomic Boundary Size (PFail): 0 00:18:56.439 Atomic Boundary Offset: 0 00:18:56.439 Maximum Single Source Range Length: 65535 00:18:56.439 Maximum Copy Length: 65535 00:18:56.439 Maximum Source Range Count: 1 00:18:56.439 NGUID/EUI64 Never Reused: No 00:18:56.439 Namespace Write Protected: No 00:18:56.439 Number of LBA Formats: 1 00:18:56.439 Current LBA Format: LBA Format #00 00:18:56.439 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:56.439 00:18:56.439 05:19:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:18:56.698 [2024-12-15 05:19:10.160191] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:02.052 Initializing NVMe Controllers 00:19:02.052 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:19:02.052 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:19:02.052 Initialization complete. Launching workers.
00:19:02.052 ========================================================
00:19:02.052 Latency(us)
00:19:02.052 Device Information : IOPS MiB/s Average min max
00:19:02.052 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39919.48 155.94 3206.30 973.77 10588.28
00:19:02.052 ========================================================
00:19:02.052 Total : 39919.48 155.94 3206.30 973.77 10588.28
00:19:02.052
00:19:02.052 [2024-12-15 05:19:15.262241] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:02.052 05:19:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2
00:19:02.052 [2024-12-15 05:19:15.490946] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:07.586 Initializing NVMe Controllers
00:19:07.586 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:07.586 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1
00:19:07.586 Initialization complete. Launching workers.
00:19:07.586 ========================================================
00:19:07.586 Latency(us)
00:19:07.586 Device Information : IOPS MiB/s Average min max
00:19:07.586 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39836.75 155.61 3212.72 1001.06 10563.95
00:19:07.586 ========================================================
00:19:07.586 Total : 39836.75 155.61 3212.72 1001.06 10563.95
00:19:07.586
00:19:07.586 [2024-12-15 05:19:20.510670] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:07.586 05:19:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE
00:19:07.586 [2024-12-15 05:19:20.713861] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:12.938 [2024-12-15 05:19:25.854144] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:12.938 Initializing NVMe Controllers
00:19:12.938 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:12.938 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2
00:19:12.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1
00:19:12.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2
00:19:12.938 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3
00:19:12.938 Initialization complete. Launching workers.
00:19:12.938 Starting thread on core 2
00:19:12.938 Starting thread on core 3
00:19:12.938 Starting thread on core 1
00:19:12.938 05:19:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g
00:19:12.938 [2024-12-15 05:19:26.146459] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:15.576 [2024-12-15 05:19:29.204215] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller
00:19:15.576 Initializing NVMe Controllers
00:19:15.576 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:19:15.576 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:19:15.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 0
00:19:15.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 1
00:19:15.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 2
00:19:15.576 Associating SPDK bdev Controller (SPDK2 ) with lcore 3
00:19:15.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration:
00:19:15.576 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1
00:19:15.576 Initialization complete. Launching workers.
00:19:15.576 Starting thread on core 1 with urgent priority queue
00:19:15.576 Starting thread on core 2 with urgent priority queue
00:19:15.576 Starting thread on core 3 with urgent priority queue
00:19:15.576 Starting thread on core 0 with urgent priority queue
00:19:15.576 SPDK bdev Controller (SPDK2 ) core 0: 7101.67 IO/s 14.08 secs/100000 ios
00:19:15.576 SPDK bdev Controller (SPDK2 ) core 1: 5320.00 IO/s 18.80 secs/100000 ios
00:19:15.576 SPDK bdev Controller (SPDK2 ) core 2: 5334.33 IO/s 18.75 secs/100000 ios
00:19:15.577 SPDK bdev Controller (SPDK2 ) core 3: 6009.33 IO/s 16.64 secs/100000 ios
00:19:15.577 ========================================================
00:19:15.577
00:19:15.577 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2'
00:19:15.861 [2024-12-15 05:19:29.482662] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller
00:19:15.861 Initializing NVMe Controllers
00:19:15.861 Attaching to /var/run/vfio-user/domain/vfio-user2/2
00:19:15.861 Attached to /var/run/vfio-user/domain/vfio-user2/2
00:19:15.861 Namespace ID: 1 size: 0GB
00:19:15.861 Initialization complete.
00:19:15.861 INFO: using host memory buffer for IO
00:19:15.861 Hello world!
00:19:15.861 [2024-12-15 05:19:29.494755] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:15.861 05:19:29 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:19:16.146 [2024-12-15 05:19:29.767296] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:17.620 Initializing NVMe Controllers 00:19:17.620 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:17.620 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:17.620 Initialization complete. Launching workers. 00:19:17.620 submit (in ns) avg, min, max = 6743.7, 3165.7, 4005535.2 00:19:17.620 complete (in ns) avg, min, max = 21045.7, 1765.7, 5992928.6 00:19:17.620 00:19:17.620 Submit histogram 00:19:17.620 ================ 00:19:17.620 Range in us Cumulative Count 00:19:17.620 3.154 - 3.170: 0.0060% ( 1) 00:19:17.620 3.170 - 3.185: 0.0480% ( 7) 00:19:17.620 3.185 - 3.200: 0.6773% ( 105) 00:19:17.620 3.200 - 3.215: 3.8420% ( 528) 00:19:17.620 3.215 - 3.230: 9.2544% ( 903) 00:19:17.620 3.230 - 3.246: 14.7686% ( 920) 00:19:17.620 3.246 - 3.261: 21.3977% ( 1106) 00:19:17.620 3.261 - 3.276: 28.9619% ( 1262) 00:19:17.620 3.276 - 3.291: 35.2973% ( 1057) 00:19:17.620 3.291 - 3.307: 40.5298% ( 873) 00:19:17.620 3.307 - 3.322: 44.7255% ( 700) 00:19:17.620 3.322 - 3.337: 49.1789% ( 743) 00:19:17.620 3.337 - 3.352: 52.9010% ( 621) 00:19:17.620 3.352 - 3.368: 58.9847% ( 1015) 00:19:17.620 3.368 - 3.383: 65.8835% ( 1151) 00:19:17.620 3.383 - 3.398: 70.6186% ( 790) 00:19:17.620 3.398 - 3.413: 75.5514% ( 823) 00:19:17.620 3.413 - 3.429: 79.2736% ( 621) 00:19:17.620 3.429 - 3.444: 81.7190% ( 408) 00:19:17.620 3.444 - 3.459: 82.9657% ( 208) 00:19:17.620 3.459 - 3.474: 83.6190% ( 109) 
00:19:17.620 3.474 - 3.490: 84.0326% ( 69) 00:19:17.620 3.490 - 3.505: 84.5421% ( 85) 00:19:17.620 3.505 - 3.520: 85.3213% ( 130) 00:19:17.620 3.520 - 3.535: 86.2443% ( 154) 00:19:17.620 3.535 - 3.550: 87.1374% ( 149) 00:19:17.620 3.550 - 3.566: 88.2282% ( 182) 00:19:17.620 3.566 - 3.581: 89.2772% ( 175) 00:19:17.620 3.581 - 3.596: 90.2421% ( 161) 00:19:17.620 3.596 - 3.611: 90.9914% ( 125) 00:19:17.620 3.611 - 3.627: 92.0523% ( 177) 00:19:17.620 3.627 - 3.642: 92.9633% ( 152) 00:19:17.620 3.642 - 3.657: 93.8864% ( 154) 00:19:17.620 3.657 - 3.672: 94.6775% ( 132) 00:19:17.620 3.672 - 3.688: 95.1990% ( 87) 00:19:17.620 3.688 - 3.703: 95.6965% ( 83) 00:19:17.620 3.703 - 3.718: 96.1340% ( 73) 00:19:17.620 3.718 - 3.733: 96.4936% ( 60) 00:19:17.620 3.733 - 3.749: 96.7933% ( 50) 00:19:17.620 3.749 - 3.764: 96.9851% ( 32) 00:19:17.620 3.764 - 3.779: 97.1230% ( 23) 00:19:17.620 3.779 - 3.794: 97.2968% ( 29) 00:19:17.620 3.794 - 3.810: 97.4287% ( 22) 00:19:17.620 3.810 - 3.825: 97.5066% ( 13) 00:19:17.620 3.825 - 3.840: 97.5845% ( 13) 00:19:17.620 3.840 - 3.855: 97.6924% ( 18) 00:19:17.620 3.855 - 3.870: 97.7643% ( 12) 00:19:17.620 3.870 - 3.886: 97.8902% ( 21) 00:19:17.620 3.886 - 3.901: 97.9741% ( 14) 00:19:17.620 3.901 - 3.931: 98.1539% ( 30) 00:19:17.620 3.931 - 3.962: 98.3457% ( 32) 00:19:17.620 3.962 - 3.992: 98.5076% ( 27) 00:19:17.620 3.992 - 4.023: 98.6694% ( 27) 00:19:17.620 4.023 - 4.053: 98.7653% ( 16) 00:19:17.620 4.053 - 4.084: 98.8552% ( 15) 00:19:17.620 4.084 - 4.114: 98.8971% ( 7) 00:19:17.620 4.114 - 4.145: 98.9511% ( 9) 00:19:17.620 4.145 - 4.175: 99.0170% ( 11) 00:19:17.620 4.175 - 4.206: 99.0650% ( 8) 00:19:17.620 4.206 - 4.236: 99.1309% ( 11) 00:19:17.620 4.236 - 4.267: 99.1609% ( 5) 00:19:17.620 4.267 - 4.297: 99.1908% ( 5) 00:19:17.620 4.297 - 4.328: 99.2268% ( 6) 00:19:17.620 4.328 - 4.358: 99.2508% ( 4) 00:19:17.620 4.358 - 4.389: 99.2807% ( 5) 00:19:17.620 4.389 - 4.419: 99.3047% ( 4) 00:19:17.620 4.419 - 4.450: 99.3107% ( 1) 00:19:17.620 4.450 - 
4.480: 99.3407% ( 5) 00:19:17.620 4.480 - 4.510: 99.3707% ( 5) 00:19:17.620 4.510 - 4.541: 99.3766% ( 1) 00:19:17.620 4.541 - 4.571: 99.3826% ( 1) 00:19:17.620 4.571 - 4.602: 99.3946% ( 2) 00:19:17.620 4.602 - 4.632: 99.4066% ( 2) 00:19:17.620 4.632 - 4.663: 99.4246% ( 3) 00:19:17.620 4.663 - 4.693: 99.4366% ( 2) 00:19:17.620 4.724 - 4.754: 99.4426% ( 1) 00:19:17.620 4.754 - 4.785: 99.4486% ( 1) 00:19:17.620 4.815 - 4.846: 99.4546% ( 1) 00:19:17.620 4.846 - 4.876: 99.4606% ( 1) 00:19:17.620 4.876 - 4.907: 99.4666% ( 1) 00:19:17.620 5.029 - 5.059: 99.4725% ( 1) 00:19:17.620 5.090 - 5.120: 99.4845% ( 2) 00:19:17.620 5.181 - 5.211: 99.4905% ( 1) 00:19:17.620 5.211 - 5.242: 99.5025% ( 2) 00:19:17.620 5.272 - 5.303: 99.5085% ( 1) 00:19:17.620 5.303 - 5.333: 99.5205% ( 2) 00:19:17.620 5.333 - 5.364: 99.5265% ( 1) 00:19:17.621 5.394 - 5.425: 99.5325% ( 1) 00:19:17.621 5.425 - 5.455: 99.5385% ( 1) 00:19:17.621 5.486 - 5.516: 99.5445% ( 1) 00:19:17.621 5.577 - 5.608: 99.5565% ( 2) 00:19:17.621 5.669 - 5.699: 99.5625% ( 1) 00:19:17.621 5.699 - 5.730: 99.5684% ( 1) 00:19:17.621 5.760 - 5.790: 99.5744% ( 1) 00:19:17.621 5.790 - 5.821: 99.5804% ( 1) 00:19:17.621 5.851 - 5.882: 99.5864% ( 1) 00:19:17.621 5.882 - 5.912: 99.5984% ( 2) 00:19:17.621 5.943 - 5.973: 99.6044% ( 1) 00:19:17.621 6.004 - 6.034: 99.6104% ( 1) 00:19:17.621 6.095 - 6.126: 99.6224% ( 2) 00:19:17.621 6.217 - 6.248: 99.6284% ( 1) 00:19:17.621 6.278 - 6.309: 99.6404% ( 2) 00:19:17.621 6.309 - 6.339: 99.6464% ( 1) 00:19:17.621 6.339 - 6.370: 99.6524% ( 1) 00:19:17.621 6.400 - 6.430: 99.6584% ( 1) 00:19:17.621 6.522 - 6.552: 99.6643% ( 1) 00:19:17.621 6.613 - 6.644: 99.6763% ( 2) 00:19:17.621 6.644 - 6.674: 99.6823% ( 1) 00:19:17.621 6.674 - 6.705: 99.6883% ( 1) 00:19:17.621 6.735 - 6.766: 99.6943% ( 1) 00:19:17.621 6.766 - 6.796: 99.7003% ( 1) 00:19:17.621 6.827 - 6.857: 99.7063% ( 1) 00:19:17.621 6.888 - 6.918: 99.7123% ( 1) 00:19:17.621 7.010 - 7.040: 99.7183% ( 1) 00:19:17.621 7.101 - 7.131: 99.7243% ( 1) 
00:19:17.621 7.253 - 7.284: 99.7303% ( 1) 00:19:17.621 7.375 - 7.406: 99.7363% ( 1) 00:19:17.621 7.497 - 7.528: 99.7423% ( 1) 00:19:17.621 7.528 - 7.558: 99.7483% ( 1) 00:19:17.621 7.619 - 7.650: 99.7543% ( 1) 00:19:17.621 7.710 - 7.741: 99.7602% ( 1) 00:19:17.621 7.741 - 7.771: 99.7662% ( 1) 00:19:17.621 7.802 - 7.863: 99.7902% ( 4) 00:19:17.621 7.863 - 7.924: 99.8022% ( 2) 00:19:17.621 8.107 - 8.168: 99.8082% ( 1) 00:19:17.621 8.229 - 8.290: 99.8142% ( 1) 00:19:17.621 8.594 - 8.655: 99.8202% ( 1) 00:19:17.621 8.777 - 8.838: 99.8262% ( 1) 00:19:17.621 8.838 - 8.899: 99.8382% ( 2) 00:19:17.621 8.899 - 8.960: 99.8442% ( 1) 00:19:17.621 8.960 - 9.021: 99.8502% ( 1) 00:19:17.621 9.021 - 9.082: 99.8561% ( 1) 00:19:17.621 9.204 - 9.265: 99.8621% ( 1) 00:19:17.621 9.630 - 9.691: 99.8681% ( 1) 00:19:17.621 9.813 - 9.874: 99.8741% ( 1) 00:19:17.621 10.179 - 10.240: 99.8801% ( 1) 00:19:17.621 11.276 - 11.337: 99.8861% ( 1) 00:19:17.621 11.825 - 11.886: 99.8921% ( 1) 00:19:17.621 12.678 - 12.739: 99.8981% ( 1) 00:19:17.621 13.410 - 13.470: 99.9041% ( 1) 00:19:17.621 16.213 - 16.335: 99.9101% ( 1) 00:19:17.621 21.090 - 21.211: 99.9161% ( 1) 00:19:17.621 3994.575 - 4025.783: 100.0000% ( 14) 00:19:17.621 00:19:17.621 Complete histogram 00:19:17.621 ================== 00:19:17.621 Range in us Cumulative Count 00:19:17.621 1.760 - 1.768: 0.0060% ( 1) 00:19:17.621 1.768 - 1.775: 0.3177% ( 52) 00:19:17.621 1.775 - 1.783: 3.5064% ( 532) 00:19:17.621 1.783 - 1.790: 16.7706% ( 2213) 00:19:17.621 1.790 - 1.798: 38.7677% ( 3670) 00:19:17.621 1.798 - 1.806: 54.7291% ( 2663) 00:19:17.621 1.806 - 1.813: 60.9326% ( 1035) 00:19:17.621 1.813 - 1.821: 63.5219% ( 432) 00:19:17.621 1.821 - 1.829: 64.7866% ( 211) 00:19:17.621 1.829 - 1.836: 66.2251% ( 240) 00:19:17.621 1.836 - 1.844: 70.7324% ( 752) 00:19:17.621 1.844 - 1.851: 78.8959% ( 1362) 00:19:17.621 1.851 - 1.859: 85.7708% ( 1147) 00:19:17.621 1.859 - 1.867: 88.8516% ( 514) 00:19:17.621 1.867 - 1.874: 90.4579% ( 268) 00:19:17.621 1.874 - 
1.882: 91.8425% ( 231) 00:19:17.621 1.882 - 1.890: 92.7595% ( 153) 00:19:17.621 1.890 - 1.897: 93.1251% ( 61) 00:19:17.621 1.897 - 1.905: 93.2990% ( 29) 00:19:17.621 1.905 - 1.912: 93.5507% ( 42) 00:19:17.621 1.912 - 1.920: 93.7545% ( 34) 00:19:17.621 1.920 - 1.928: 93.9223% ( 28) 00:19:17.621 1.928 - 1.935: 94.0542% ( 22) 00:19:17.621 1.935 - 1.943: 94.1201% ( 11) 00:19:17.621 1.943 - 1.950: 94.1860% ( 11) 00:19:17.621 1.950 - 1.966: 94.3299% ( 24) 00:19:17.621 1.966 - 1.981: 94.5397% ( 35) 00:19:17.621 1.981 - 1.996: 94.7974% ( 43) 00:19:17.621 1.996 - 2.011: 95.0491% ( 42) 00:19:17.621 2.011 - 2.027: 95.3548% ( 51) 00:19:17.621 2.027 - 2.042: 95.8403% ( 81) 00:19:17.621 2.042 - 2.057: 96.2000% ( 60) 00:19:17.621 2.057 - 2.072: 96.5356% ( 56) 00:19:17.621 2.072 - 2.088: 96.8413% ( 51) 00:19:17.621 2.088 - 2.103: 97.0750% ( 39) 00:19:17.621 2.103 - 2.118: 97.2549% ( 30) 00:19:17.621 2.118 - 2.133: 97.4706% ( 36) 00:19:17.621 2.133 - 2.149: 97.6325% ( 27) 00:19:17.621 2.149 - 2.164: 97.7943% ( 27) 00:19:17.621 2.164 - 2.179: 97.9621% ( 28) 00:19:17.621 2.179 - 2.194: 98.1299% ( 28) 00:19:17.621 2.194 - 2.210: 98.2378% ( 18) 00:19:17.621 2.210 - 2.225: 98.3397% ( 17) 00:19:17.621 2.225 - 2.240: 98.4117% ( 12) 00:19:17.621 2.240 - 2.255: 98.5016% ( 15) 00:19:17.621 2.255 - 2.270: 98.5495% ( 8) 00:19:17.621 2.270 - 2.286: 98.5915% ( 7) 00:19:17.621 2.286 - 2.301: 98.6274% ( 6) 00:19:17.621 2.301 - 2.316: 98.6754% ( 8) 00:19:17.621 2.316 - 2.331: 98.7293% ( 9) 00:19:17.621 2.331 - 2.347: 98.7713% ( 7) 00:19:17.621 2.347 - 2.362: 98.8012% ( 5) 00:19:17.621 2.362 - 2.377: 98.8072% ( 1) 00:19:17.621 2.377 - 2.392: 98.8192% ( 2) 00:19:17.621 2.392 - 2.408: 98.8492% ( 5) 00:19:17.621 2.408 - 2.423: 98.8732% ( 4) 00:19:17.621 2.423 - 2.438: 98.8792% ( 1) 00:19:17.621 2.438 - 2.453: 98.8912% ( 2) 00:19:17.621 2.453 - 2.469: 98.9151% ( 4) 00:19:17.621 2.469 - 2.484: 98.9451% ( 5) 00:19:17.621 2.484 - 2.499: 98.9691% ( 4) 00:19:17.621 2.499 - 2.514: 98.9751% ( 1) 00:19:17.621 
2.514 - 2.530: 98.9811% ( 1) 00:19:17.621 2.530 - 2.545: 98.9871% ( 1) 00:19:17.621 2.545 - 2.560: 98.9990% ( 2) 00:19:17.621 2.560 - 2.575: 99.0230% ( 4) 00:19:17.621 2.575 - 2.590: 99.0290% ( 1) 00:19:17.621 2.606 - 2.621: 99.0530% ( 4) 00:19:17.621 2.621 - 2.636: 99.0590% ( 1) 00:19:17.621 2.636 - 2.651: 99.0650% ( 1) 00:19:17.621 2.651 - 2.667: 99.0770% ( 2) 00:19:17.621 2.667 - 2.682: 99.0889% ( 2) 00:19:17.621 2.682 - 2.697: 99.1069% ( 3) 00:19:17.621 2.712 - 2.728: 99.1129% ( 1) 00:19:17.621 2.743 - 2.758: 99.1309% ( 3) 00:19:17.621 2.758 - 2.773: 99.1369% ( 1) 00:19:17.621 2.789 - 2.804: 99.1549% ( 3) 00:19:17.621 2.804 - 2.819: 99.1609% ( 1) 00:19:17.621 2.819 - 2.834: 99.1669% ( 1) 00:19:17.621 2.880 - 2.895: 99.1789% ( 2) 00:19:17.621 3.124 - 3.139: 99.1848% ( 1) 00:19:17.621 3.139 - 3.154: 99.1908% ( 1) 00:19:17.621 3.368 - 3.383: 99.1968% ( 1) 00:19:17.621 3.383 - 3.398: 99.2028% ( 1) 00:19:17.621 3.749 - 3.764: 99.2088% ( 1) 00:19:17.621 3.764 - 3.779: 99.2148% ( 1) 00:19:17.621 3.794 - 3.810: 99.2208% ( 1) 00:19:17.621 3.855 - 3.870: 99.2268% ( 1) 00:19:17.621 3.931 - 3.962: 99.2328% ( 1) 00:19:17.621 3.992 - 4.023: 99.2388% ( 1) 00:19:17.621 4.084 - 4.114: 99.2448% ( 1) 00:19:17.621 4.114 - 4.145: 99.2508% ( 1) 00:19:17.621 4.175 - 4.206: 99.2628% ( 2) 00:19:17.621 4.328 - 4.358: 99.2748% ( 2) 00:19:17.621 4.358 - 4.389: 99.2867% ( 2) 00:19:17.621 4.480 - 4.510: 99.2987% ( 2) 00:19:17.622 4.571 - 4.602: 99.3047% ( 1) 00:19:17.622 4.632 - 4.663: 99.3107% ( 1) 00:19:17.622 4.663 - 4.693: 99.3227% ( 2) 00:19:17.622 4.724 - 4.754: 99.3347% ( 2) 00:19:17.622 4.754 - 4.785: 99.3407% ( 1) 00:19:17.622 4.846 - 4.876: 99.3467% ( 1) 00:19:17.622 4.937 - 4.968: 99.3527% ( 1) 00:19:17.622 4.968 - 4.998: 99.3587% ( 1) 00:19:17.622 5.090 - 5.120: 99.3707% ( 2) 00:19:17.622 5.364 - 5.394: 99.3826% ( 2) 00:19:17.622 5.394 - 5.425: 99.3886% ( 1) 00:19:17.622 5.486 - 5.516: 99.3946% ( 1) 00:19:17.622 5.547 - 5.577: 9[2024-12-15 05:19:30.869107] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:17.622 9.4006% ( 1) 00:19:17.622 5.638 - 5.669: 99.4066% ( 1) 00:19:17.622 5.821 - 5.851: 99.4126% ( 1) 00:19:17.622 5.912 - 5.943: 99.4186% ( 1) 00:19:17.622 6.217 - 6.248: 99.4246% ( 1) 00:19:17.622 6.400 - 6.430: 99.4306% ( 1) 00:19:17.622 6.827 - 6.857: 99.4366% ( 1) 00:19:17.622 7.223 - 7.253: 99.4426% ( 1) 00:19:17.622 7.467 - 7.497: 99.4486% ( 1) 00:19:17.622 7.802 - 7.863: 99.4546% ( 1) 00:19:17.622 8.229 - 8.290: 99.4606% ( 1) 00:19:17.622 9.265 - 9.326: 99.4666% ( 1) 00:19:17.622 10.667 - 10.728: 99.4725% ( 1) 00:19:17.622 11.337 - 11.398: 99.4785% ( 1) 00:19:17.622 12.434 - 12.495: 99.4845% ( 1) 00:19:17.622 14.750 - 14.811: 99.4905% ( 1) 00:19:17.622 14.811 - 14.872: 99.4965% ( 1) 00:19:17.622 17.920 - 18.042: 99.5025% ( 1) 00:19:17.622 19.139 - 19.261: 99.5085% ( 1) 00:19:17.622 19.627 - 19.749: 99.5145% ( 1) 00:19:17.622 39.497 - 39.741: 99.5205% ( 1) 00:19:17.622 2028.495 - 2044.099: 99.5265% ( 1) 00:19:17.622 2637.044 - 2652.648: 99.5325% ( 1) 00:19:17.622 3978.971 - 3994.575: 99.5385% ( 1) 00:19:17.622 3994.575 - 4025.783: 99.9820% ( 74) 00:19:17.622 4993.219 - 5024.427: 99.9940% ( 2) 00:19:17.622 5991.863 - 6023.070: 100.0000% ( 1) 00:19:17.622 00:19:17.622 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:19:17.622 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:19:17.622 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:19:17.622 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:19:17.622 05:19:30 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:17.622 [ 00:19:17.622 { 00:19:17.622 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:17.622 "subtype": "Discovery", 00:19:17.622 "listen_addresses": [], 00:19:17.622 "allow_any_host": true, 00:19:17.622 "hosts": [] 00:19:17.622 }, 00:19:17.622 { 00:19:17.622 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:17.622 "subtype": "NVMe", 00:19:17.622 "listen_addresses": [ 00:19:17.622 { 00:19:17.622 "trtype": "VFIOUSER", 00:19:17.622 "adrfam": "IPv4", 00:19:17.622 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:17.622 "trsvcid": "0" 00:19:17.622 } 00:19:17.622 ], 00:19:17.622 "allow_any_host": true, 00:19:17.622 "hosts": [], 00:19:17.622 "serial_number": "SPDK1", 00:19:17.622 "model_number": "SPDK bdev Controller", 00:19:17.622 "max_namespaces": 32, 00:19:17.622 "min_cntlid": 1, 00:19:17.622 "max_cntlid": 65519, 00:19:17.622 "namespaces": [ 00:19:17.622 { 00:19:17.622 "nsid": 1, 00:19:17.622 "bdev_name": "Malloc1", 00:19:17.622 "name": "Malloc1", 00:19:17.622 "nguid": "F67EC4CD5C1947369078541F62C69FD5", 00:19:17.622 "uuid": "f67ec4cd-5c19-4736-9078-541f62c69fd5" 00:19:17.622 }, 00:19:17.622 { 00:19:17.622 "nsid": 2, 00:19:17.622 "bdev_name": "Malloc3", 00:19:17.622 "name": "Malloc3", 00:19:17.622 "nguid": "E47169EF5E874C60AB2A5B254F318184", 00:19:17.622 "uuid": "e47169ef-5e87-4c60-ab2a-5b254f318184" 00:19:17.622 } 00:19:17.622 ] 00:19:17.622 }, 00:19:17.622 { 00:19:17.622 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:17.622 "subtype": "NVMe", 00:19:17.622 "listen_addresses": [ 00:19:17.622 { 00:19:17.622 "trtype": "VFIOUSER", 00:19:17.622 "adrfam": "IPv4", 00:19:17.622 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:17.622 "trsvcid": "0" 00:19:17.622 } 00:19:17.622 ], 00:19:17.622 "allow_any_host": true, 00:19:17.622 "hosts": [], 00:19:17.622 "serial_number": "SPDK2", 00:19:17.622 "model_number": "SPDK bdev Controller", 00:19:17.622 "max_namespaces": 32, 
00:19:17.622 "min_cntlid": 1, 00:19:17.622 "max_cntlid": 65519, 00:19:17.622 "namespaces": [ 00:19:17.622 { 00:19:17.622 "nsid": 1, 00:19:17.622 "bdev_name": "Malloc2", 00:19:17.622 "name": "Malloc2", 00:19:17.622 "nguid": "966E53FCDD75482FB1D25625CA935124", 00:19:17.622 "uuid": "966e53fc-dd75-482f-b1d2-5625ca935124" 00:19:17.622 } 00:19:17.622 ] 00:19:17.622 } 00:19:17.622 ] 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=304460 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=1 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # i=2 00:19:17.622 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1273 -- # sleep 0.1 00:19:17.622 [2024-12-15 05:19:31.255409] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:19:17.913 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:17.913 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:19:17.913 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:19:17.913 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:19:17.913 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:19:17.913 Malloc4 00:19:17.913 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:19:18.205 [2024-12-15 05:19:31.706886] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:19:18.205 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:19:18.205 Asynchronous Event Request test 00:19:18.205 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:19:18.205 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:19:18.205 
Registering asynchronous event callbacks... 00:19:18.205 Starting namespace attribute notice tests for all controllers... 00:19:18.205 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:19:18.205 aer_cb - Changed Namespace 00:19:18.205 Cleaning up... 00:19:18.491 [ 00:19:18.491 { 00:19:18.491 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:19:18.491 "subtype": "Discovery", 00:19:18.491 "listen_addresses": [], 00:19:18.491 "allow_any_host": true, 00:19:18.491 "hosts": [] 00:19:18.491 }, 00:19:18.491 { 00:19:18.491 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:19:18.491 "subtype": "NVMe", 00:19:18.491 "listen_addresses": [ 00:19:18.491 { 00:19:18.491 "trtype": "VFIOUSER", 00:19:18.491 "adrfam": "IPv4", 00:19:18.491 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:19:18.491 "trsvcid": "0" 00:19:18.491 } 00:19:18.491 ], 00:19:18.491 "allow_any_host": true, 00:19:18.491 "hosts": [], 00:19:18.491 "serial_number": "SPDK1", 00:19:18.491 "model_number": "SPDK bdev Controller", 00:19:18.491 "max_namespaces": 32, 00:19:18.491 "min_cntlid": 1, 00:19:18.491 "max_cntlid": 65519, 00:19:18.491 "namespaces": [ 00:19:18.491 { 00:19:18.491 "nsid": 1, 00:19:18.491 "bdev_name": "Malloc1", 00:19:18.491 "name": "Malloc1", 00:19:18.491 "nguid": "F67EC4CD5C1947369078541F62C69FD5", 00:19:18.491 "uuid": "f67ec4cd-5c19-4736-9078-541f62c69fd5" 00:19:18.491 }, 00:19:18.491 { 00:19:18.491 "nsid": 2, 00:19:18.491 "bdev_name": "Malloc3", 00:19:18.491 "name": "Malloc3", 00:19:18.491 "nguid": "E47169EF5E874C60AB2A5B254F318184", 00:19:18.491 "uuid": "e47169ef-5e87-4c60-ab2a-5b254f318184" 00:19:18.491 } 00:19:18.491 ] 00:19:18.491 }, 00:19:18.491 { 00:19:18.491 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:19:18.491 "subtype": "NVMe", 00:19:18.491 "listen_addresses": [ 00:19:18.491 { 00:19:18.491 "trtype": "VFIOUSER", 00:19:18.491 "adrfam": "IPv4", 00:19:18.491 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:19:18.491 "trsvcid": "0" 
00:19:18.491 } 00:19:18.491 ], 00:19:18.491 "allow_any_host": true, 00:19:18.491 "hosts": [], 00:19:18.491 "serial_number": "SPDK2", 00:19:18.491 "model_number": "SPDK bdev Controller", 00:19:18.491 "max_namespaces": 32, 00:19:18.491 "min_cntlid": 1, 00:19:18.491 "max_cntlid": 65519, 00:19:18.491 "namespaces": [ 00:19:18.491 { 00:19:18.491 "nsid": 1, 00:19:18.491 "bdev_name": "Malloc2", 00:19:18.491 "name": "Malloc2", 00:19:18.491 "nguid": "966E53FCDD75482FB1D25625CA935124", 00:19:18.491 "uuid": "966e53fc-dd75-482f-b1d2-5625ca935124" 00:19:18.491 }, 00:19:18.491 { 00:19:18.491 "nsid": 2, 00:19:18.491 "bdev_name": "Malloc4", 00:19:18.491 "name": "Malloc4", 00:19:18.491 "nguid": "D248452CEDE442C1B50A66ACFA8CEA7A", 00:19:18.491 "uuid": "d248452c-ede4-42c1-b50a-66acfa8cea7a" 00:19:18.491 } 00:19:18.491 ] 00:19:18.491 } 00:19:18.491 ] 00:19:18.491 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 304460 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 296813 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 296813 ']' 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 296813 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 296813 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 296813' 00:19:18.492 killing process with pid 296813 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 296813 00:19:18.492 05:19:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 296813 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=304696 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 304696' 00:19:18.773 Process pid: 304696 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 304696 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@835 -- # '[' -z 304696 ']' 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:18.773 [2024-12-15 05:19:32.277053] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:19:18.773 [2024-12-15 05:19:32.277885] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:18.773 [2024-12-15 05:19:32.277922] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:18.773 [2024-12-15 05:19:32.335128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:18.773 [2024-12-15 05:19:32.358510] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:18.773 [2024-12-15 05:19:32.358546] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:19:18.773 [2024-12-15 05:19:32.358553] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:18.773 [2024-12-15 05:19:32.358559] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:18.773 [2024-12-15 05:19:32.358564] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:18.773 [2024-12-15 05:19:32.359918] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:18.773 [2024-12-15 05:19:32.360055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:18.773 [2024-12-15 05:19:32.360087] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.773 [2024-12-15 05:19:32.360088] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:19:18.773 [2024-12-15 05:19:32.423348] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:19:18.773 [2024-12-15 05:19:32.424411] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:19:18.773 [2024-12-15 05:19:32.424548] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:19:18.773 [2024-12-15 05:19:32.424896] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:19:18.773 [2024-12-15 05:19:32.424937] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:19:18.773 05:19:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:19:19.790 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:19:20.067 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:19:20.067 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:19:20.067 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:20.067 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:19:20.067 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:19:20.341 Malloc1 00:19:20.341 05:19:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:19:20.616 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:19:20.894 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:19:20.894 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:19:20.894 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:19:20.894 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:19:21.166 Malloc2 00:19:21.166 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:19:21.435 05:19:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:19:21.435 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 304696 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 304696 ']' 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 304696 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:21.712 05:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 304696 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 304696' 00:19:21.712 killing process with pid 304696 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 304696 00:19:21.712 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 304696 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:19:21.992 00:19:21.992 real 0m51.113s 00:19:21.992 user 3m17.871s 00:19:21.992 sys 0m3.230s 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:19:21.992 ************************************ 00:19:21.992 END TEST nvmf_vfio_user 00:19:21.992 ************************************ 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:19:21.992 ************************************ 00:19:21.992 START TEST nvmf_vfio_user_nvme_compliance 00:19:21.992 ************************************ 00:19:21.992 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:19:22.272 * Looking for test storage... 00:19:22.272 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lcov --version 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:19:22.272 05:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:22.272 05:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:22.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.272 --rc genhtml_branch_coverage=1 00:19:22.272 --rc genhtml_function_coverage=1 00:19:22.272 --rc genhtml_legend=1 00:19:22.272 --rc geninfo_all_blocks=1 00:19:22.272 --rc geninfo_unexecuted_blocks=1 00:19:22.272 00:19:22.272 ' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:22.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.272 --rc genhtml_branch_coverage=1 00:19:22.272 --rc genhtml_function_coverage=1 00:19:22.272 --rc genhtml_legend=1 00:19:22.272 --rc geninfo_all_blocks=1 00:19:22.272 --rc geninfo_unexecuted_blocks=1 00:19:22.272 00:19:22.272 ' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:22.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.272 --rc genhtml_branch_coverage=1 00:19:22.272 --rc genhtml_function_coverage=1 00:19:22.272 --rc 
genhtml_legend=1 00:19:22.272 --rc geninfo_all_blocks=1 00:19:22.272 --rc geninfo_unexecuted_blocks=1 00:19:22.272 00:19:22.272 ' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:22.272 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:22.272 --rc genhtml_branch_coverage=1 00:19:22.272 --rc genhtml_function_coverage=1 00:19:22.272 --rc genhtml_legend=1 00:19:22.272 --rc geninfo_all_blocks=1 00:19:22.272 --rc geninfo_unexecuted_blocks=1 00:19:22.272 00:19:22.272 ' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:22.272 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.273 05:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # : 0 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:22.273 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:22.273 05:19:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=305252 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 305252' 00:19:22.273 Process pid: 305252 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 305252 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 305252 ']' 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- 
# local rpc_addr=/var/tmp/spdk.sock 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.273 05:19:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:22.273 [2024-12-15 05:19:35.889285] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:19:22.273 [2024-12-15 05:19:35.889334] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.560 [2024-12-15 05:19:35.962709] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:22.560 [2024-12-15 05:19:35.985112] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.560 [2024-12-15 05:19:35.985148] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.560 [2024-12-15 05:19:35.985155] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.560 [2024-12-15 05:19:35.985162] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.560 [2024-12-15 05:19:35.985167] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:22.560 [2024-12-15 05:19:35.986427] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:22.560 [2024-12-15 05:19:35.986529] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.560 [2024-12-15 05:19:35.986530] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:22.560 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.560 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:19:22.560 05:19:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 05:19:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:23.559 malloc0 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:19:23.559 05:19:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:19:23.832 00:19:23.832 00:19:23.832 CUnit - A unit testing framework for C - Version 2.1-3 00:19:23.832 http://cunit.sourceforge.net/ 00:19:23.832 00:19:23.832 00:19:23.832 Suite: nvme_compliance 00:19:23.832 Test: admin_identify_ctrlr_verify_dptr ...[2024-12-15 05:19:37.313436] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.832 [2024-12-15 05:19:37.314755] vfio_user.c: 832:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:19:23.832 [2024-12-15 05:19:37.314770] vfio_user.c:5544:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:19:23.832 [2024-12-15 05:19:37.314776] vfio_user.c:5637:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:19:23.832 [2024-12-15 05:19:37.317464] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.832 passed 00:19:23.832 Test: admin_identify_ctrlr_verify_fused ...[2024-12-15 05:19:37.395005] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:23.832 [2024-12-15 05:19:37.398025] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:23.832 passed 00:19:23.832 Test: admin_identify_ns ...[2024-12-15 05:19:37.478157] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.101 [2024-12-15 05:19:37.539015] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:19:24.101 [2024-12-15 05:19:37.547003] ctrlr.c:2753:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:19:24.102 [2024-12-15 05:19:37.568094] vfio_user.c:2835:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user: disabling controller 00:19:24.102 passed 00:19:24.102 Test: admin_get_features_mandatory_features ...[2024-12-15 05:19:37.641837] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.102 [2024-12-15 05:19:37.644857] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.102 passed 00:19:24.102 Test: admin_get_features_optional_features ...[2024-12-15 05:19:37.721333] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.102 [2024-12-15 05:19:37.724350] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.102 passed 00:19:24.375 Test: admin_set_features_number_of_queues ...[2024-12-15 05:19:37.802107] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.375 [2024-12-15 05:19:37.905090] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.375 passed 00:19:24.375 Test: admin_get_log_page_mandatory_logs ...[2024-12-15 05:19:37.981572] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.375 [2024-12-15 05:19:37.984589] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.375 passed 00:19:24.665 Test: admin_get_log_page_with_lpo ...[2024-12-15 05:19:38.062233] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.665 [2024-12-15 05:19:38.131001] ctrlr.c:2700:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:19:24.665 [2024-12-15 05:19:38.144053] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.665 passed 00:19:24.665 Test: fabric_property_get ...[2024-12-15 05:19:38.219611] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.665 [2024-12-15 05:19:38.220850] vfio_user.c:5637:handle_cmd_req: *ERROR*: 
/var/run/vfio-user: process NVMe command opc 0x7f failed 00:19:24.665 [2024-12-15 05:19:38.222639] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.665 passed 00:19:24.665 Test: admin_delete_io_sq_use_admin_qid ...[2024-12-15 05:19:38.297149] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.665 [2024-12-15 05:19:38.298372] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:19:24.665 [2024-12-15 05:19:38.300165] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.665 passed 00:19:24.984 Test: admin_delete_io_sq_delete_sq_twice ...[2024-12-15 05:19:38.374796] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.984 [2024-12-15 05:19:38.459001] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:24.984 [2024-12-15 05:19:38.474997] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:24.984 [2024-12-15 05:19:38.480080] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.984 passed 00:19:24.984 Test: admin_delete_io_cq_use_admin_qid ...[2024-12-15 05:19:38.552616] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:24.984 [2024-12-15 05:19:38.553836] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:19:24.984 [2024-12-15 05:19:38.555633] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:24.984 passed 00:19:24.984 Test: admin_delete_io_cq_delete_cq_first ...[2024-12-15 05:19:38.633203] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.274 [2024-12-15 05:19:38.710004] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:25.274 [2024-12-15 
05:19:38.733997] vfio_user.c:2329:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:19:25.274 [2024-12-15 05:19:38.739089] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.274 passed 00:19:25.274 Test: admin_create_io_cq_verify_iv_pc ...[2024-12-15 05:19:38.812747] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.274 [2024-12-15 05:19:38.813971] vfio_user.c:2178:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:19:25.274 [2024-12-15 05:19:38.813998] vfio_user.c:2172:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:19:25.274 [2024-12-15 05:19:38.815774] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.274 passed 00:19:25.274 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-12-15 05:19:38.892323] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.544 [2024-12-15 05:19:38.984998] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:19:25.544 [2024-12-15 05:19:38.993016] vfio_user.c:2260:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:19:25.544 [2024-12-15 05:19:39.001019] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:19:25.544 [2024-12-15 05:19:39.008998] vfio_user.c:2058:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:19:25.544 [2024-12-15 05:19:39.038087] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.544 passed 00:19:25.544 Test: admin_create_io_sq_verify_pc ...[2024-12-15 05:19:39.111671] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:25.544 [2024-12-15 05:19:39.127004] vfio_user.c:2071:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:19:25.544 [2024-12-15 05:19:39.144838] 
vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:25.544 passed 00:19:25.544 Test: admin_create_io_qp_max_qps ...[2024-12-15 05:19:39.222348] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.025 [2024-12-15 05:19:40.313004] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:19:27.025 [2024-12-15 05:19:40.689168] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.310 passed 00:19:27.310 Test: admin_create_io_sq_shared_cq ...[2024-12-15 05:19:40.765946] vfio_user.c:2873:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:19:27.310 [2024-12-15 05:19:40.899009] vfio_user.c:2339:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:19:27.310 [2024-12-15 05:19:40.936067] vfio_user.c:2835:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:19:27.310 passed 00:19:27.310 00:19:27.310 Run Summary: Type Total Ran Passed Failed Inactive 00:19:27.310 suites 1 1 n/a 0 0 00:19:27.310 tests 18 18 18 0 0 00:19:27.310 asserts 360 360 360 0 n/a 00:19:27.310 00:19:27.310 Elapsed time = 1.485 seconds 00:19:27.310 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 305252 00:19:27.310 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 305252 ']' 00:19:27.310 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 305252 00:19:27.310 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:19:27.310 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:27.310 05:19:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 305252 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 305252' 00:19:27.587 killing process with pid 305252 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 305252 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 305252 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:19:27.587 00:19:27.587 real 0m5.565s 00:19:27.587 user 0m15.601s 00:19:27.587 sys 0m0.490s 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:19:27.587 ************************************ 00:19:27.587 END TEST nvmf_vfio_user_nvme_compliance 00:19:27.587 ************************************ 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:27.587 ************************************ 00:19:27.587 START TEST nvmf_vfio_user_fuzz 00:19:27.587 ************************************ 00:19:27.587 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:19:27.878 * Looking for test storage... 00:19:27.878 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz 
-- scripts/common.sh@338 -- # local 'op=<' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:19:27.878 05:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:27.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.878 --rc genhtml_branch_coverage=1 00:19:27.878 --rc genhtml_function_coverage=1 00:19:27.878 --rc genhtml_legend=1 00:19:27.878 --rc geninfo_all_blocks=1 00:19:27.878 --rc geninfo_unexecuted_blocks=1 00:19:27.878 00:19:27.878 ' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:27.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.878 --rc genhtml_branch_coverage=1 00:19:27.878 --rc genhtml_function_coverage=1 00:19:27.878 --rc genhtml_legend=1 00:19:27.878 --rc geninfo_all_blocks=1 00:19:27.878 --rc geninfo_unexecuted_blocks=1 00:19:27.878 00:19:27.878 ' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:27.878 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:27.878 --rc genhtml_branch_coverage=1 00:19:27.878 --rc genhtml_function_coverage=1 00:19:27.878 --rc genhtml_legend=1 00:19:27.878 --rc geninfo_all_blocks=1 00:19:27.878 --rc geninfo_unexecuted_blocks=1 00:19:27.878 00:19:27.878 ' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:27.878 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:19:27.878 --rc genhtml_branch_coverage=1 00:19:27.878 --rc genhtml_function_coverage=1 00:19:27.878 --rc genhtml_legend=1 00:19:27.878 --rc geninfo_all_blocks=1 00:19:27.878 --rc geninfo_unexecuted_blocks=1 00:19:27.878 00:19:27.878 ' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:27.878 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.879 05:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # : 0 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:19:27.879 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # 
MALLOC_BDEV_SIZE=64 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=306284 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 306284' 00:19:27.879 Process pid: 306284 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 306284 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 306284 ']' 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:27.879 05:19:41 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:27.879 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:27.879 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:28.165 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.165 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:19:28.165 05:19:41 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.165 malloc0 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:19:29.165 05:19:42 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:20:01.507 Fuzzing completed. Shutting down the fuzz application 00:20:01.507 00:20:01.507 Dumping successful admin opcodes: 00:20:01.507 9, 10, 00:20:01.507 Dumping successful io opcodes: 00:20:01.507 0, 00:20:01.507 NS: 0x20000081ef00 I/O qp, Total commands completed: 1135808, total successful commands: 4474, random_seed: 2970325568 00:20:01.507 NS: 0x20000081ef00 admin qp, Total commands completed: 279680, total successful commands: 64, random_seed: 3642738560 00:20:01.507 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 306284 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 306284 ']' 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 306284 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 306284 00:20:01.508 05:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 306284' 00:20:01.508 killing process with pid 306284 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 306284 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 306284 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:20:01.508 00:20:01.508 real 0m32.164s 00:20:01.508 user 0m33.086s 00:20:01.508 sys 0m27.895s 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:20:01.508 ************************************ 00:20:01.508 END TEST nvmf_vfio_user_fuzz 00:20:01.508 ************************************ 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:01.508 ************************************ 00:20:01.508 START TEST nvmf_auth_target 00:20:01.508 ************************************ 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:20:01.508 * Looking for test storage... 00:20:01.508 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lcov --version 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:20:01.508 05:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:01.508 05:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:01.508 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:01.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.509 --rc genhtml_branch_coverage=1 00:20:01.509 --rc genhtml_function_coverage=1 00:20:01.509 --rc genhtml_legend=1 00:20:01.509 --rc geninfo_all_blocks=1 00:20:01.509 --rc geninfo_unexecuted_blocks=1 00:20:01.509 00:20:01.509 ' 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:01.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.509 --rc genhtml_branch_coverage=1 00:20:01.509 --rc genhtml_function_coverage=1 00:20:01.509 --rc genhtml_legend=1 00:20:01.509 --rc geninfo_all_blocks=1 00:20:01.509 --rc geninfo_unexecuted_blocks=1 00:20:01.509 00:20:01.509 ' 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:01.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.509 --rc genhtml_branch_coverage=1 00:20:01.509 --rc genhtml_function_coverage=1 00:20:01.509 --rc genhtml_legend=1 00:20:01.509 --rc geninfo_all_blocks=1 00:20:01.509 --rc geninfo_unexecuted_blocks=1 00:20:01.509 00:20:01.509 ' 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:01.509 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:01.509 --rc genhtml_branch_coverage=1 00:20:01.509 --rc genhtml_function_coverage=1 00:20:01.509 --rc genhtml_legend=1 00:20:01.509 
--rc geninfo_all_blocks=1 00:20:01.509 --rc geninfo_unexecuted_blocks=1 00:20:01.509 00:20:01.509 ' 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:01.509 
05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:01.509 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # : 0 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:20:01.510 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:20:01.510 05:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:20:01.510 05:20:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # xtrace_disable 00:20:01.510 05:20:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # pci_devs=() 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # net_devs=() 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # e810=() 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # local -ga e810 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # x722=() 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # local -ga x722 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # mlx=() 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@322 -- # local -ga mlx 00:20:05.711 05:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:20:05.711 05:20:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:20:05.711 Found 0000:af:00.0 (0x8086 - 0x159b) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:20:05.711 Found 0000:af:00.1 (0x8086 - 0x159b) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:05.711 
05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.711 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:20:05.712 Found net devices under 0000:af:00.0: cvl_0_0 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:20:05.712 
05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:20:05.712 Found net devices under 0000:af:00.1: cvl_0_1 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@442 -- # is_hw=yes 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:20:05.712 05:20:19 
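The discovery loop above (common.sh@411) maps each matched PCI function to its kernel net device by globbing `/sys/bus/pci/devices/$pci/net/*`. A minimal Python sketch of that lookup; the PCI address is the one from this log, and the `sysfs` parameter is added here purely so the helper can be exercised against a fake tree:

```python
import glob
import os

def net_devs_for_pci(pci, sysfs="/sys/bus/pci/devices"):
    """Return the net device names exposed under a PCI function's sysfs node,
    mirroring the pci_net_devs=(".../net/"*) expansion in the log above."""
    return sorted(os.path.basename(p)
                  for p in glob.glob(f"{sysfs}/{pci}/net/*"))
```

On the test host this resolves `0000:af:00.0` to `cvl_0_0` and `0000:af:00.1` to `cvl_0_1`, matching the "Found net devices under ..." lines.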
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:20:05.712 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:20:05.972 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:05.972 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:20:05.972 00:20:05.972 --- 10.0.0.2 ping statistics --- 00:20:05.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.972 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:05.972 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:05.972 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.150 ms 00:20:05.972 00:20:05.972 --- 10.0.0.1 ping statistics --- 00:20:05.972 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:05.972 rtt min/avg/max/mdev = 0.150/0.150/0.150/0.000 ms 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # return 0 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
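The `nvmf_tcp_init` sequence above isolates one physical port in a network namespace so the target and initiator can talk over real NICs on one host. The interface, namespace, and address names below are taken from the log; the helper itself is an illustrative sketch that only assembles the `ip(8)` command strings rather than executing them (the real steps need root):

```python
def netns_setup_cmds(target_if="cvl_0_0", init_if="cvl_0_1",
                     ns="cvl_0_0_ns_spdk",
                     target_ip="10.0.0.2/24", init_ip="10.0.0.1/24"):
    """Return the ip(8) commands that move the target port into its own
    namespace and address both ends, as performed in the log above."""
    return [
        f"ip netns add {ns}",                    # namespace for the target side
        f"ip link set {target_if} netns {ns}",   # move one physical port into it
        f"ip addr add {init_ip} dev {init_if}",  # initiator stays in the host ns
        f"ip netns exec {ns} ip addr add {target_ip} dev {target_if}",
        f"ip link set {init_if} up",
        f"ip netns exec {ns} ip link set {target_if} up",
        f"ip netns exec {ns} ip link set lo up",
    ]
```

After these steps an iptables ACCEPT rule opens TCP port 4420 on the initiator interface, and the cross-namespace pings above confirm 10.0.0.1 ⇄ 10.0.0.2 connectivity.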
-- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=314561 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 314561 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314561 ']' 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:05.972 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.231 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.231 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:06.231 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=314585 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@754 -- # digest=null 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=b491049532db330b2a32c94b50e891271e38c0992f78d1e9 00:20:06.232 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.2L0 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key b491049532db330b2a32c94b50e891271e38c0992f78d1e9 0 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 b491049532db330b2a32c94b50e891271e38c0992f78d1e9 0 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=b491049532db330b2a32c94b50e891271e38c0992f78d1e9 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=0 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.2L0 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.2L0 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.2L0 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=9dafc2f182f5ad873218ce96d1d0aeee61066228569792dcba9fc116cc40938f 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.Xaj 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 9dafc2f182f5ad873218ce96d1d0aeee61066228569792dcba9fc116cc40938f 3 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 9dafc2f182f5ad873218ce96d1d0aeee61066228569792dcba9fc116cc40938f 3 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=9dafc2f182f5ad873218ce96d1d0aeee61066228569792dcba9fc116cc40938f 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@732 -- # digest=3 00:20:06.491 05:20:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.Xaj 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.Xaj 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Xaj 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cffa6bc939430319eefe7359f876fdac 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.N5P 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cffa6bc939430319eefe7359f876fdac 1 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 
cffa6bc939430319eefe7359f876fdac 1 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cffa6bc939430319eefe7359f876fdac 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.N5P 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.N5P 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.N5P 00:20:06.491 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=cb661a63306212765782305ef5730af582ec310310fbbf96 00:20:06.492 05:20:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.uV4 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key cb661a63306212765782305ef5730af582ec310310fbbf96 2 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 cb661a63306212765782305ef5730af582ec310310fbbf96 2 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=cb661a63306212765782305ef5730af582ec310310fbbf96 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.uV4 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.uV4 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.uV4 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A 
digests 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha384 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=48 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=109d50d748e7bb054d677d80ab84e1776c1aa2eff64a4375 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.zDE 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 109d50d748e7bb054d677d80ab84e1776c1aa2eff64a4375 2 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 109d50d748e7bb054d677d80ab84e1776c1aa2eff64a4375 2 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=109d50d748e7bb054d677d80ab84e1776c1aa2eff64a4375 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=2 00:20:06.492 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.zDE 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.zDE 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
keys[2]=/tmp/spdk.key-sha384.zDE 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha256 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=32 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=6244f9de7726785f23e133cd0df7bf62 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.Yw4 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key 6244f9de7726785f23e133cd0df7bf62 1 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # format_key DHHC-1 6244f9de7726785f23e133cd0df7bf62 1 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=6244f9de7726785f23e133cd0df7bf62 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=1 
00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.Yw4 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.Yw4 00:20:06.751 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.Yw4 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@751 -- # local digest len file key 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@752 -- # local -A digests 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # digest=sha512 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@754 -- # len=64 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@755 -- # key=e781e552f0613174f235339049ff9ffc334f3664720b480a3c8a8e63ee400579 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.23o 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@757 -- # format_dhchap_key e781e552f0613174f235339049ff9ffc334f3664720b480a3c8a8e63ee400579 3 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@747 -- # 
format_key DHHC-1 e781e552f0613174f235339049ff9ffc334f3664720b480a3c8a8e63ee400579 3 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # local prefix key digest 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # key=e781e552f0613174f235339049ff9ffc334f3664720b480a3c8a8e63ee400579 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # digest=3 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@733 -- # python - 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.23o 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.23o 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.23o 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 314561 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314561 ']' 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
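The `gen_dhchap_key` calls above read N/2 random bytes via `xxd -p -c0 -l <N/2> /dev/urandom` and pipe the hex string through a small inline Python snippet (`format_key DHHC-1 ... <digest>`). A hedged sketch of what that formatting appears to do, based on the NVMe DH-HMAC-CHAP secret representation: the raw key bytes plus their CRC-32 (little-endian) are base64-encoded into a `DHHC-1:<transform>:<base64>:` string. The exact inline script is not shown in the log, so treat this as an approximation:

```python
import base64
import binascii

def format_dhchap_key(key_hex: str, digest: int) -> str:
    """Serialize a hex key as a DHHC-1 secret string.
    digest: 0 = null (no transform), 1 = sha256, 2 = sha384, 3 = sha512,
    matching the digests map in the log above."""
    raw = bytes.fromhex(key_hex)
    crc = binascii.crc32(raw).to_bytes(4, "little")  # integrity suffix
    b64 = base64.b64encode(raw + crc).decode()
    return f"DHHC-1:{digest:02x}:{b64}:"
```

For a 48-hex-digit key (24 random bytes, as in `gen_dhchap_key null 48`), the base64 payload covers 28 bytes and the result is written to a `chmod 0600` temp file such as `/tmp/spdk.key-null.2L0`.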
00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.752 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 314585 /var/tmp/host.sock 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 314585 ']' 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:20:07.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:07.011 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2L0 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.2L0 00:20:07.271 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.2L0 00:20:07.530 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Xaj ]] 00:20:07.530 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xaj 00:20:07.530 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.530 05:20:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xaj 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xaj 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.N5P 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.N5P 00:20:07.530 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.N5P 00:20:07.788 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.uV4 ]] 00:20:07.788 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uV4 00:20:07.788 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:07.788 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:07.788 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:07.788 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uV4 00:20:07.788 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uV4 00:20:08.047 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:08.047 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zDE 00:20:08.047 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.047 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.047 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.047 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.zDE 00:20:08.047 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.zDE 00:20:08.306 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.Yw4 ]] 00:20:08.306 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yw4 00:20:08.306 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.306 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.306 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.306 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yw4 00:20:08.306 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yw4 00:20:08.566 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:20:08.566 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.23o 00:20:08.566 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.566 05:20:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.23o 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.23o 00:20:08.566 05:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:08.566 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.825 05:20:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:08.825 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:09.084 00:20:09.084 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:09.084 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:09.084 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:09.343 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:09.343 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:09.343 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:09.343 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.343 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:09.343 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:09.343 { 00:20:09.343 "cntlid": 1, 00:20:09.343 "qid": 0, 00:20:09.343 "state": "enabled", 00:20:09.343 "thread": "nvmf_tgt_poll_group_000", 00:20:09.343 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:09.343 "listen_address": { 00:20:09.343 "trtype": "TCP", 00:20:09.343 "adrfam": "IPv4", 00:20:09.343 "traddr": "10.0.0.2", 00:20:09.343 "trsvcid": "4420" 00:20:09.343 }, 00:20:09.343 "peer_address": { 00:20:09.343 "trtype": "TCP", 00:20:09.343 "adrfam": "IPv4", 00:20:09.343 "traddr": "10.0.0.1", 00:20:09.343 "trsvcid": "52552" 00:20:09.343 }, 00:20:09.344 "auth": { 00:20:09.344 "state": "completed", 00:20:09.344 "digest": "sha256", 00:20:09.344 "dhgroup": "null" 00:20:09.344 } 00:20:09.344 } 00:20:09.344 ]' 00:20:09.344 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:09.344 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:09.344 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:09.344 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:09.344 05:20:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:09.344 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:09.344 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:09.344 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:09.603 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:09.603 05:20:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:12.896 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:20:12.896 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.156 05:20:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:13.415 00:20:13.415 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:13.415 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:13.415 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:13.674 { 00:20:13.674 "cntlid": 3, 00:20:13.674 "qid": 0, 00:20:13.674 "state": "enabled", 00:20:13.674 "thread": "nvmf_tgt_poll_group_000", 00:20:13.674 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:13.674 "listen_address": { 00:20:13.674 "trtype": "TCP", 00:20:13.674 "adrfam": "IPv4", 00:20:13.674 
"traddr": "10.0.0.2", 00:20:13.674 "trsvcid": "4420" 00:20:13.674 }, 00:20:13.674 "peer_address": { 00:20:13.674 "trtype": "TCP", 00:20:13.674 "adrfam": "IPv4", 00:20:13.674 "traddr": "10.0.0.1", 00:20:13.674 "trsvcid": "47856" 00:20:13.674 }, 00:20:13.674 "auth": { 00:20:13.674 "state": "completed", 00:20:13.674 "digest": "sha256", 00:20:13.674 "dhgroup": "null" 00:20:13.674 } 00:20:13.674 } 00:20:13.674 ]' 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:13.674 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:13.934 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:13.934 05:20:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 
--hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:14.502 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.502 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:14.762 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:15.021 00:20:15.021 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:15.021 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:15.021 
05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:15.281 { 00:20:15.281 "cntlid": 5, 00:20:15.281 "qid": 0, 00:20:15.281 "state": "enabled", 00:20:15.281 "thread": "nvmf_tgt_poll_group_000", 00:20:15.281 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:15.281 "listen_address": { 00:20:15.281 "trtype": "TCP", 00:20:15.281 "adrfam": "IPv4", 00:20:15.281 "traddr": "10.0.0.2", 00:20:15.281 "trsvcid": "4420" 00:20:15.281 }, 00:20:15.281 "peer_address": { 00:20:15.281 "trtype": "TCP", 00:20:15.281 "adrfam": "IPv4", 00:20:15.281 "traddr": "10.0.0.1", 00:20:15.281 "trsvcid": "47882" 00:20:15.281 }, 00:20:15.281 "auth": { 00:20:15.281 "state": "completed", 00:20:15.281 "digest": "sha256", 00:20:15.281 "dhgroup": "null" 00:20:15.281 } 00:20:15.281 } 00:20:15.281 ]' 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:15.281 05:20:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:15.540 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:15.540 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:16.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.108 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:20:16.366 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:20:16.366 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.367 05:20:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:16.626 00:20:16.626 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:16.626 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:16.626 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:16.885 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:16.885 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:16.885 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.885 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:16.885 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.885 
05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:16.885 { 00:20:16.885 "cntlid": 7, 00:20:16.885 "qid": 0, 00:20:16.885 "state": "enabled", 00:20:16.885 "thread": "nvmf_tgt_poll_group_000", 00:20:16.885 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:16.885 "listen_address": { 00:20:16.885 "trtype": "TCP", 00:20:16.885 "adrfam": "IPv4", 00:20:16.885 "traddr": "10.0.0.2", 00:20:16.885 "trsvcid": "4420" 00:20:16.885 }, 00:20:16.885 "peer_address": { 00:20:16.885 "trtype": "TCP", 00:20:16.885 "adrfam": "IPv4", 00:20:16.885 "traddr": "10.0.0.1", 00:20:16.885 "trsvcid": "47900" 00:20:16.886 }, 00:20:16.886 "auth": { 00:20:16.886 "state": "completed", 00:20:16.886 "digest": "sha256", 00:20:16.886 "dhgroup": "null" 00:20:16.886 } 00:20:16.886 } 00:20:16.886 ]' 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:16.886 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:17.145 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:17.146 05:20:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:17.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:17.714 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:17.973 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:18.234 00:20:18.234 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:18.234 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:18.234 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:18.492 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:18.492 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:18.492 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:18.492 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:18.493 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:18.493 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:18.493 { 00:20:18.493 "cntlid": 9, 00:20:18.493 "qid": 0, 00:20:18.493 "state": "enabled", 00:20:18.493 "thread": "nvmf_tgt_poll_group_000", 00:20:18.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:18.493 "listen_address": { 00:20:18.493 "trtype": "TCP", 00:20:18.493 "adrfam": "IPv4", 00:20:18.493 "traddr": "10.0.0.2", 00:20:18.493 "trsvcid": "4420" 00:20:18.493 }, 00:20:18.493 "peer_address": { 00:20:18.493 "trtype": "TCP", 00:20:18.493 "adrfam": "IPv4", 00:20:18.493 "traddr": "10.0.0.1", 00:20:18.493 "trsvcid": "47922" 00:20:18.493 
}, 00:20:18.493 "auth": { 00:20:18.493 "state": "completed", 00:20:18.493 "digest": "sha256", 00:20:18.493 "dhgroup": "ffdhe2048" 00:20:18.493 } 00:20:18.493 } 00:20:18.493 ]' 00:20:18.493 05:20:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:18.493 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:18.493 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:18.493 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:18.493 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:18.493 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:18.493 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:18.493 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:18.752 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:18.752 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret 
DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:19.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.320 05:20:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:19.579 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:19.580 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.580 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.580 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:19.839 00:20:19.839 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:19.839 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:19.839 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:20.099 { 00:20:20.099 "cntlid": 11, 00:20:20.099 "qid": 0, 00:20:20.099 "state": "enabled", 00:20:20.099 "thread": "nvmf_tgt_poll_group_000", 00:20:20.099 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:20.099 "listen_address": { 00:20:20.099 "trtype": "TCP", 00:20:20.099 "adrfam": "IPv4", 00:20:20.099 "traddr": "10.0.0.2", 00:20:20.099 "trsvcid": "4420" 00:20:20.099 }, 00:20:20.099 "peer_address": { 00:20:20.099 "trtype": "TCP", 00:20:20.099 "adrfam": "IPv4", 00:20:20.099 "traddr": "10.0.0.1", 00:20:20.099 "trsvcid": "47950" 00:20:20.099 }, 00:20:20.099 "auth": { 00:20:20.099 "state": "completed", 00:20:20.099 "digest": "sha256", 00:20:20.099 "dhgroup": "ffdhe2048" 00:20:20.099 } 00:20:20.099 } 00:20:20.099 ]' 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:20.099 05:20:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:20.099 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:20.358 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:20.358 05:20:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:20.927 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:20.927 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.186 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:21.445 00:20:21.445 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:21.445 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:21.445 05:20:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:21.445 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:21.445 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:21.445 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:21.445 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:21.445 05:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:21.445 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:21.445 { 00:20:21.445 "cntlid": 13, 00:20:21.445 "qid": 0, 00:20:21.445 "state": "enabled", 00:20:21.445 "thread": "nvmf_tgt_poll_group_000", 00:20:21.445 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:21.445 "listen_address": { 00:20:21.445 "trtype": "TCP", 00:20:21.445 "adrfam": "IPv4", 00:20:21.445 "traddr": "10.0.0.2", 00:20:21.445 "trsvcid": "4420" 00:20:21.445 }, 00:20:21.445 "peer_address": { 00:20:21.445 "trtype": "TCP", 00:20:21.445 "adrfam": "IPv4", 00:20:21.445 "traddr": "10.0.0.1", 00:20:21.445 "trsvcid": "47982" 00:20:21.445 }, 00:20:21.445 "auth": { 00:20:21.445 "state": "completed", 00:20:21.445 "digest": "sha256", 00:20:21.445 "dhgroup": "ffdhe2048" 00:20:21.445 } 00:20:21.445 } 00:20:21.445 ]' 00:20:21.445 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:21.704 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:21.704 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:21.704 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:21.704 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:21.704 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:21.704 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:21.704 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:21.962 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:21.962 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:22.531 05:20:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:22.531 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.531 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:22.790 00:20:22.790 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:22.790 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:22.790 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:23.049 { 00:20:23.049 "cntlid": 15, 00:20:23.049 "qid": 0, 00:20:23.049 "state": "enabled", 00:20:23.049 "thread": "nvmf_tgt_poll_group_000", 00:20:23.049 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:23.049 "listen_address": { 00:20:23.049 "trtype": "TCP", 00:20:23.049 "adrfam": "IPv4", 00:20:23.049 "traddr": "10.0.0.2", 00:20:23.049 "trsvcid": "4420" 00:20:23.049 }, 00:20:23.049 "peer_address": { 00:20:23.049 "trtype": "TCP", 00:20:23.049 "adrfam": "IPv4", 00:20:23.049 "traddr": "10.0.0.1", 
00:20:23.049 "trsvcid": "48008" 00:20:23.049 }, 00:20:23.049 "auth": { 00:20:23.049 "state": "completed", 00:20:23.049 "digest": "sha256", 00:20:23.049 "dhgroup": "ffdhe2048" 00:20:23.049 } 00:20:23.049 } 00:20:23.049 ]' 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:23.049 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:23.309 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:23.309 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:23.309 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:23.309 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:23.309 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:23.309 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:23.309 05:20:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:23.876 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:23.876 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:24.135 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:20:24.135 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:24.136 05:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.136 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:24.394 00:20:24.394 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:24.394 05:20:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:24.394 05:20:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:24.653 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:24.653 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:24.654 { 00:20:24.654 "cntlid": 17, 00:20:24.654 "qid": 0, 00:20:24.654 "state": "enabled", 00:20:24.654 "thread": "nvmf_tgt_poll_group_000", 00:20:24.654 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:24.654 "listen_address": { 00:20:24.654 "trtype": "TCP", 00:20:24.654 "adrfam": "IPv4", 00:20:24.654 "traddr": "10.0.0.2", 00:20:24.654 "trsvcid": "4420" 00:20:24.654 }, 00:20:24.654 "peer_address": { 00:20:24.654 "trtype": "TCP", 00:20:24.654 "adrfam": "IPv4", 00:20:24.654 "traddr": "10.0.0.1", 00:20:24.654 "trsvcid": "42430" 00:20:24.654 }, 00:20:24.654 "auth": { 00:20:24.654 "state": "completed", 00:20:24.654 "digest": "sha256", 00:20:24.654 "dhgroup": "ffdhe3072" 00:20:24.654 } 00:20:24.654 } 00:20:24.654 ]' 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:24.654 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:24.913 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:24.913 05:20:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:25.481 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.481 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:25.742 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:26.000 00:20:26.000 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:26.000 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:26.000 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:26.259 
05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:26.259 { 00:20:26.259 "cntlid": 19, 00:20:26.259 "qid": 0, 00:20:26.259 "state": "enabled", 00:20:26.259 "thread": "nvmf_tgt_poll_group_000", 00:20:26.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:26.259 "listen_address": { 00:20:26.259 "trtype": "TCP", 00:20:26.259 "adrfam": "IPv4", 00:20:26.259 "traddr": "10.0.0.2", 00:20:26.259 "trsvcid": "4420" 00:20:26.259 }, 00:20:26.259 "peer_address": { 00:20:26.259 "trtype": "TCP", 00:20:26.259 "adrfam": "IPv4", 00:20:26.259 "traddr": "10.0.0.1", 00:20:26.259 "trsvcid": "42464" 00:20:26.259 }, 00:20:26.259 "auth": { 00:20:26.259 "state": "completed", 00:20:26.259 "digest": "sha256", 00:20:26.259 "dhgroup": "ffdhe3072" 00:20:26.259 } 00:20:26.259 } 00:20:26.259 ]' 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:26.259 05:20:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:26.518 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:26.518 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:27.086 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:27.086 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:27.086 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:27.087 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.087 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.087 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.087 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:27.087 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.087 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.346 05:20:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.346 05:20:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:27.605 00:20:27.605 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:27.605 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:27.605 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:27.864 { 00:20:27.864 "cntlid": 21, 00:20:27.864 "qid": 0, 00:20:27.864 "state": "enabled", 00:20:27.864 "thread": "nvmf_tgt_poll_group_000", 00:20:27.864 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:27.864 "listen_address": { 00:20:27.864 "trtype": "TCP", 00:20:27.864 "adrfam": "IPv4", 00:20:27.864 "traddr": "10.0.0.2", 00:20:27.864 "trsvcid": "4420" 00:20:27.864 }, 00:20:27.864 "peer_address": { 
00:20:27.864 "trtype": "TCP", 00:20:27.864 "adrfam": "IPv4", 00:20:27.864 "traddr": "10.0.0.1", 00:20:27.864 "trsvcid": "42486" 00:20:27.864 }, 00:20:27.864 "auth": { 00:20:27.864 "state": "completed", 00:20:27.864 "digest": "sha256", 00:20:27.864 "dhgroup": "ffdhe3072" 00:20:27.864 } 00:20:27.864 } 00:20:27.864 ]' 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:27.864 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:28.123 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:28.123 05:20:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:28.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.691 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:20:28.951 05:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:28.951 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:29.210 00:20:29.210 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:29.210 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:29.210 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:29.469 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:29.469 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:29.469 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.469 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:29.469 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.469 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:29.469 { 00:20:29.469 "cntlid": 23, 00:20:29.469 "qid": 0, 00:20:29.469 "state": "enabled", 00:20:29.469 "thread": "nvmf_tgt_poll_group_000", 00:20:29.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:29.469 "listen_address": { 00:20:29.469 "trtype": "TCP", 00:20:29.469 "adrfam": "IPv4", 00:20:29.469 "traddr": "10.0.0.2", 00:20:29.469 "trsvcid": "4420" 00:20:29.469 }, 00:20:29.469 "peer_address": { 00:20:29.469 "trtype": "TCP", 00:20:29.469 "adrfam": "IPv4", 00:20:29.469 "traddr": "10.0.0.1", 00:20:29.469 "trsvcid": "42512" 00:20:29.469 }, 00:20:29.469 "auth": { 00:20:29.469 "state": "completed", 00:20:29.469 "digest": "sha256", 00:20:29.469 "dhgroup": "ffdhe3072" 00:20:29.469 } 00:20:29.469 } 00:20:29.469 ]' 00:20:29.469 05:20:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:29.469 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:29.469 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:29.469 05:20:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:20:29.469 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:29.469 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:29.469 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:29.469 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:29.728 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:29.728 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:30.296 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
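The iterations above all follow the same shape: a nested loop over DH groups and key IDs, with five RPCs per iteration. A dry-run sketch of that sequence (NQNs, socket path, and addresses copied from the log; `rpc` here only echoes and records the command instead of invoking `scripts/rpc.py`, so the sketch runs without a live SPDK target — and it passes controller keys unconditionally, whereas the log registers key3 without one because `ckeys[3]` is empty):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the auth test loop seen in the log above.
# No SPDK target is required: rpc() echoes and records each command
# rather than invoking scripts/rpc.py.
set -euo pipefail

subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562"

cmds=()
rpc() { echo "rpc.py -s /var/tmp/host.sock $*"; cmds+=("$*"); }

for dhgroup in ffdhe3072 ffdhe4096; do
  for keyid in 0 1 2 3; do
    # Restrict the host to one digest / DH-group combination.
    rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups "$dhgroup"
    # Register the host on the subsystem with key N (and ctrlr key N).
    rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Attach an authenticated bdev controller, then tear everything down.
    rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$hostnqn" -n "$subnqn" -b nvme0 \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    rpc bdev_nvme_detach_controller nvme0
    rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
  done
done
```

In the real run each iteration additionally verifies the negotiated parameters before detaching — `nvmf_subsystem_get_qpairs` output is piped through jq filters such as `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` and compared against the expected `sha256`/`ffdhe*`/`completed` values — and the kernel host path is exercised with `nvme connect` using the matching DHHC-1 secrets.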
00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:30.296 05:20:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.555 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:30.814 00:20:30.814 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:30.814 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:30.814 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.073 05:20:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:31.073 { 00:20:31.073 "cntlid": 25, 00:20:31.073 "qid": 0, 00:20:31.073 "state": "enabled", 00:20:31.073 "thread": "nvmf_tgt_poll_group_000", 00:20:31.073 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:31.073 "listen_address": { 00:20:31.073 "trtype": "TCP", 00:20:31.073 "adrfam": "IPv4", 00:20:31.073 "traddr": "10.0.0.2", 00:20:31.073 "trsvcid": "4420" 00:20:31.073 }, 00:20:31.073 "peer_address": { 00:20:31.073 "trtype": "TCP", 00:20:31.073 "adrfam": "IPv4", 00:20:31.073 "traddr": "10.0.0.1", 00:20:31.073 "trsvcid": "42532" 00:20:31.073 }, 00:20:31.073 "auth": { 00:20:31.073 "state": "completed", 00:20:31.073 "digest": "sha256", 00:20:31.073 "dhgroup": "ffdhe4096" 00:20:31.073 } 00:20:31.073 } 00:20:31.073 ]' 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:31.073 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:31.332 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:31.332 05:20:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:31.908 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:31.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:31.908 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:31.908 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:31.908 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:31.908 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:31.908 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:31.908 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:31.908 05:20:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.167 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:32.425 00:20:32.425 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:32.425 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:32.425 05:20:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:32.684 { 00:20:32.684 "cntlid": 27, 00:20:32.684 "qid": 0, 00:20:32.684 "state": "enabled", 00:20:32.684 "thread": "nvmf_tgt_poll_group_000", 00:20:32.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:32.684 "listen_address": { 00:20:32.684 "trtype": "TCP", 00:20:32.684 "adrfam": "IPv4", 00:20:32.684 "traddr": "10.0.0.2", 00:20:32.684 
"trsvcid": "4420" 00:20:32.684 }, 00:20:32.684 "peer_address": { 00:20:32.684 "trtype": "TCP", 00:20:32.684 "adrfam": "IPv4", 00:20:32.684 "traddr": "10.0.0.1", 00:20:32.684 "trsvcid": "42564" 00:20:32.684 }, 00:20:32.684 "auth": { 00:20:32.684 "state": "completed", 00:20:32.684 "digest": "sha256", 00:20:32.684 "dhgroup": "ffdhe4096" 00:20:32.684 } 00:20:32.684 } 00:20:32.684 ]' 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:32.684 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:32.942 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:32.942 05:20:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:33.510 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.510 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:33.769 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:34.028 00:20:34.028 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:34.028 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:20:34.028 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:34.287 { 00:20:34.287 "cntlid": 29, 00:20:34.287 "qid": 0, 00:20:34.287 "state": "enabled", 00:20:34.287 "thread": "nvmf_tgt_poll_group_000", 00:20:34.287 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:34.287 "listen_address": { 00:20:34.287 "trtype": "TCP", 00:20:34.287 "adrfam": "IPv4", 00:20:34.287 "traddr": "10.0.0.2", 00:20:34.287 "trsvcid": "4420" 00:20:34.287 }, 00:20:34.287 "peer_address": { 00:20:34.287 "trtype": "TCP", 00:20:34.287 "adrfam": "IPv4", 00:20:34.287 "traddr": "10.0.0.1", 00:20:34.287 "trsvcid": "57160" 00:20:34.287 }, 00:20:34.287 "auth": { 00:20:34.287 "state": "completed", 00:20:34.287 "digest": "sha256", 00:20:34.287 "dhgroup": "ffdhe4096" 00:20:34.287 } 00:20:34.287 } 00:20:34.287 ]' 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:34.287 05:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:34.287 05:20:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:34.546 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:34.546 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:35.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.114 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.373 05:20:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:35.631 00:20:35.631 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:35.631 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:35.631 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:35.890 { 00:20:35.890 "cntlid": 31, 00:20:35.890 "qid": 0, 00:20:35.890 "state": "enabled", 00:20:35.890 "thread": "nvmf_tgt_poll_group_000", 00:20:35.890 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:35.890 "listen_address": { 00:20:35.890 "trtype": "TCP", 00:20:35.890 "adrfam": "IPv4", 00:20:35.890 "traddr": "10.0.0.2", 00:20:35.890 "trsvcid": "4420" 00:20:35.890 }, 00:20:35.890 "peer_address": { 00:20:35.890 "trtype": "TCP", 00:20:35.890 "adrfam": "IPv4", 00:20:35.890 "traddr": "10.0.0.1", 00:20:35.890 "trsvcid": "57204" 00:20:35.890 }, 00:20:35.890 "auth": { 00:20:35.890 "state": "completed", 00:20:35.890 "digest": "sha256", 00:20:35.890 "dhgroup": "ffdhe4096" 00:20:35.890 } 00:20:35.890 } 00:20:35.890 ]' 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:35.890 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:36.148 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:36.148 05:20:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:36.714 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:36.714 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.714 05:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:36.973 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:37.232 00:20:37.232 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:37.232 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:37.232 05:20:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:37.491 { 00:20:37.491 "cntlid": 33, 00:20:37.491 "qid": 0, 00:20:37.491 "state": "enabled", 00:20:37.491 "thread": "nvmf_tgt_poll_group_000", 00:20:37.491 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:37.491 "listen_address": { 00:20:37.491 "trtype": "TCP", 00:20:37.491 "adrfam": "IPv4", 00:20:37.491 "traddr": "10.0.0.2", 00:20:37.491 
"trsvcid": "4420" 00:20:37.491 }, 00:20:37.491 "peer_address": { 00:20:37.491 "trtype": "TCP", 00:20:37.491 "adrfam": "IPv4", 00:20:37.491 "traddr": "10.0.0.1", 00:20:37.491 "trsvcid": "57228" 00:20:37.491 }, 00:20:37.491 "auth": { 00:20:37.491 "state": "completed", 00:20:37.491 "digest": "sha256", 00:20:37.491 "dhgroup": "ffdhe6144" 00:20:37.491 } 00:20:37.491 } 00:20:37.491 ]' 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:37.491 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:37.750 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:37.750 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:37.750 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:37.750 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:37.750 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:38.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.318 05:20:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:38.577 05:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.577 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:38.836 00:20:38.836 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:38.836 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:38.836 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:39.095 { 00:20:39.095 "cntlid": 35, 00:20:39.095 "qid": 0, 00:20:39.095 "state": "enabled", 00:20:39.095 "thread": "nvmf_tgt_poll_group_000", 00:20:39.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:39.095 "listen_address": { 00:20:39.095 "trtype": "TCP", 00:20:39.095 "adrfam": "IPv4", 00:20:39.095 "traddr": "10.0.0.2", 00:20:39.095 "trsvcid": "4420" 00:20:39.095 }, 00:20:39.095 "peer_address": { 00:20:39.095 "trtype": "TCP", 00:20:39.095 "adrfam": "IPv4", 00:20:39.095 "traddr": "10.0.0.1", 00:20:39.095 "trsvcid": "57252" 00:20:39.095 }, 00:20:39.095 "auth": { 00:20:39.095 "state": "completed", 00:20:39.095 "digest": "sha256", 00:20:39.095 "dhgroup": "ffdhe6144" 00:20:39.095 } 00:20:39.095 } 00:20:39.095 ]' 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:39.095 05:20:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:39.095 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:39.354 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:39.354 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:39.354 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:39.354 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:39.354 05:20:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:39.354 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:39.354 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:39.921 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:39.921 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.181 05:20:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:40.748 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:40.748 05:20:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:40.748 { 00:20:40.748 "cntlid": 37, 00:20:40.748 "qid": 0, 00:20:40.748 "state": "enabled", 00:20:40.748 "thread": "nvmf_tgt_poll_group_000", 00:20:40.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:40.748 "listen_address": { 00:20:40.748 "trtype": "TCP", 00:20:40.748 "adrfam": "IPv4", 00:20:40.748 "traddr": "10.0.0.2", 00:20:40.748 "trsvcid": "4420" 00:20:40.748 }, 00:20:40.748 "peer_address": { 00:20:40.748 "trtype": "TCP", 00:20:40.748 "adrfam": "IPv4", 00:20:40.748 "traddr": "10.0.0.1", 00:20:40.748 "trsvcid": "57286" 00:20:40.748 }, 00:20:40.748 "auth": { 00:20:40.748 "state": "completed", 00:20:40.748 "digest": "sha256", 00:20:40.748 "dhgroup": "ffdhe6144" 00:20:40.748 } 00:20:40.748 } 00:20:40.748 ]' 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:40.748 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:41.007 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:41.007 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:41.007 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:41.007 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:41.007 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:41.007 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:41.007 05:20:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:41.577 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:41.577 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:41.577 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:41.577 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.837 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.837 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.837 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:41.837 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:41.838 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:42.404 00:20:42.404 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:42.404 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:42.404 05:20:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:42.404 { 00:20:42.404 "cntlid": 39, 00:20:42.404 "qid": 0, 00:20:42.404 "state": "enabled", 00:20:42.404 "thread": "nvmf_tgt_poll_group_000", 00:20:42.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:42.404 "listen_address": { 00:20:42.404 "trtype": "TCP", 00:20:42.404 "adrfam": 
"IPv4", 00:20:42.404 "traddr": "10.0.0.2", 00:20:42.404 "trsvcid": "4420" 00:20:42.404 }, 00:20:42.404 "peer_address": { 00:20:42.404 "trtype": "TCP", 00:20:42.404 "adrfam": "IPv4", 00:20:42.404 "traddr": "10.0.0.1", 00:20:42.404 "trsvcid": "57312" 00:20:42.404 }, 00:20:42.404 "auth": { 00:20:42.404 "state": "completed", 00:20:42.404 "digest": "sha256", 00:20:42.404 "dhgroup": "ffdhe6144" 00:20:42.404 } 00:20:42.404 } 00:20:42.404 ]' 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:42.404 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:42.663 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:20:42.663 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:42.663 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:42.663 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:42.663 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:42.663 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:42.663 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:43.230 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.230 05:20:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:43.489 
05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.489 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.490 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:43.490 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:44.057 00:20:44.057 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:44.057 05:20:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:44.057 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:44.316 { 00:20:44.316 "cntlid": 41, 00:20:44.316 "qid": 0, 00:20:44.316 "state": "enabled", 00:20:44.316 "thread": "nvmf_tgt_poll_group_000", 00:20:44.316 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:44.316 "listen_address": { 00:20:44.316 "trtype": "TCP", 00:20:44.316 "adrfam": "IPv4", 00:20:44.316 "traddr": "10.0.0.2", 00:20:44.316 "trsvcid": "4420" 00:20:44.316 }, 00:20:44.316 "peer_address": { 00:20:44.316 "trtype": "TCP", 00:20:44.316 "adrfam": "IPv4", 00:20:44.316 "traddr": "10.0.0.1", 00:20:44.316 "trsvcid": "49868" 00:20:44.316 }, 00:20:44.316 "auth": { 00:20:44.316 "state": "completed", 00:20:44.316 "digest": "sha256", 00:20:44.316 "dhgroup": "ffdhe8192" 00:20:44.316 } 00:20:44.316 } 00:20:44.316 ]' 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:44.316 05:20:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:44.575 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:44.575 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:45.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.143 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.402 05:20:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:45.969 00:20:45.969 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:45.969 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:45.969 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:45.969 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:45.969 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:45.969 05:20:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:45.970 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:45.970 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:45.970 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:45.970 { 00:20:45.970 "cntlid": 43, 00:20:45.970 "qid": 0, 00:20:45.970 "state": "enabled", 00:20:45.970 "thread": "nvmf_tgt_poll_group_000", 00:20:45.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:45.970 "listen_address": { 00:20:45.970 "trtype": "TCP", 00:20:45.970 "adrfam": "IPv4", 00:20:45.970 "traddr": "10.0.0.2", 00:20:45.970 "trsvcid": "4420" 00:20:45.970 }, 00:20:45.970 "peer_address": { 00:20:45.970 "trtype": "TCP", 00:20:45.970 "adrfam": "IPv4", 00:20:45.970 "traddr": "10.0.0.1", 00:20:45.970 "trsvcid": "49892" 00:20:45.970 }, 00:20:45.970 "auth": { 00:20:45.970 "state": "completed", 00:20:45.970 "digest": "sha256", 00:20:45.970 "dhgroup": "ffdhe8192" 00:20:45.970 } 00:20:45.970 } 00:20:45.970 ]' 00:20:45.970 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:46.228 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:46.228 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:46.228 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:46.228 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:46.228 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:46.228 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:46.228 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:46.486 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:46.486 05:20:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:47.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.054 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.313 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.313 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.313 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.313 05:21:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:47.572 00:20:47.572 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:47.572 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:47.572 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:47.836 { 00:20:47.836 "cntlid": 45, 00:20:47.836 "qid": 0, 00:20:47.836 "state": "enabled", 00:20:47.836 "thread": "nvmf_tgt_poll_group_000", 00:20:47.836 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:47.836 
"listen_address": { 00:20:47.836 "trtype": "TCP", 00:20:47.836 "adrfam": "IPv4", 00:20:47.836 "traddr": "10.0.0.2", 00:20:47.836 "trsvcid": "4420" 00:20:47.836 }, 00:20:47.836 "peer_address": { 00:20:47.836 "trtype": "TCP", 00:20:47.836 "adrfam": "IPv4", 00:20:47.836 "traddr": "10.0.0.1", 00:20:47.836 "trsvcid": "49912" 00:20:47.836 }, 00:20:47.836 "auth": { 00:20:47.836 "state": "completed", 00:20:47.836 "digest": "sha256", 00:20:47.836 "dhgroup": "ffdhe8192" 00:20:47.836 } 00:20:47.836 } 00:20:47.836 ]' 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:47.836 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:48.098 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:48.098 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:48.098 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:48.098 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:48.098 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:48.098 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:48.098 05:21:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:48.666 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.666 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:48.925 05:21:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:49.495 00:20:49.495 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:49.495 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:20:49.495 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:49.755 { 00:20:49.755 "cntlid": 47, 00:20:49.755 "qid": 0, 00:20:49.755 "state": "enabled", 00:20:49.755 "thread": "nvmf_tgt_poll_group_000", 00:20:49.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:49.755 "listen_address": { 00:20:49.755 "trtype": "TCP", 00:20:49.755 "adrfam": "IPv4", 00:20:49.755 "traddr": "10.0.0.2", 00:20:49.755 "trsvcid": "4420" 00:20:49.755 }, 00:20:49.755 "peer_address": { 00:20:49.755 "trtype": "TCP", 00:20:49.755 "adrfam": "IPv4", 00:20:49.755 "traddr": "10.0.0.1", 00:20:49.755 "trsvcid": "49952" 00:20:49.755 }, 00:20:49.755 "auth": { 00:20:49.755 "state": "completed", 00:20:49.755 "digest": "sha256", 00:20:49.755 "dhgroup": "ffdhe8192" 00:20:49.755 } 00:20:49.755 } 00:20:49.755 ]' 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:20:49.755 05:21:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:49.755 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:50.014 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:50.014 05:21:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:50.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
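Throughout this trace, `connect_authenticate` builds its controller-key argument with `ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})`, so the flag pair is emitted only when a bidirectional key exists for that index — which is why the `key3` `nvmf_subsystem_add_host` call above carries no `--dhchap-ctrlr-key`. A minimal runnable sketch of that idiom (placeholder key material, not the real DHHC secrets):

```shell
#!/usr/bin/env bash
# The ${var:+word} expansion yields "word" only when var is set and
# non-empty; expanding it unquoted inside an array assignment therefore
# produces either two command-line arguments or none at all.
build_ckey_args() {
    local keyid=$1
    local -a ckey=(${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"})
    echo "${ckey[@]:-<no ctrlr key>}"
}

# Index 3 left empty, mirroring the key3 iteration in the log above.
ckeys=("placeholder0" "placeholder1" "placeholder2" "")

build_ckey_args 2   # prints: --dhchap-ctrlr-key ckey2
build_ckey_args 3   # prints: <no ctrlr key>
```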
00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:50.581 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:50.839 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:20:50.839 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.840 
05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:50.840 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:51.098 00:20:51.098 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:51.098 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:51.098 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:51.357 { 00:20:51.357 "cntlid": 49, 00:20:51.357 "qid": 0, 00:20:51.357 "state": "enabled", 00:20:51.357 "thread": "nvmf_tgt_poll_group_000", 00:20:51.357 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:51.357 "listen_address": { 00:20:51.357 "trtype": "TCP", 00:20:51.357 "adrfam": "IPv4", 00:20:51.357 "traddr": "10.0.0.2", 00:20:51.357 "trsvcid": "4420" 00:20:51.357 }, 00:20:51.357 "peer_address": { 00:20:51.357 "trtype": "TCP", 00:20:51.357 "adrfam": "IPv4", 00:20:51.357 "traddr": "10.0.0.1", 00:20:51.357 "trsvcid": "49988" 00:20:51.357 }, 00:20:51.357 "auth": { 00:20:51.357 "state": "completed", 00:20:51.357 "digest": "sha384", 00:20:51.357 "dhgroup": "null" 00:20:51.357 } 00:20:51.357 } 00:20:51.357 ]' 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
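Each iteration in this run is the same fixed sequence: configure the host's digest/dhgroup under test, register the host NQN with its key on the target, attach a controller over TCP, read the negotiated `.auth` fields back from `nvmf_subsystem_get_qpairs`, then tear everything down. A dry-run sketch of that cycle, with `run` printing instead of executing — the real script splits commands between the target RPC and the host's `/var/tmp/host.sock` socket, and running them for real requires a live SPDK target (it also does a kernel `nvme connect`/`nvme disconnect` pass, omitted here):

```shell
#!/usr/bin/env bash
# Dry-run of one connect_authenticate cycle from target/auth.sh.
# run() echoes the command; point it at a live target to execute for real.
HOSTRPC='rpc.py -s /var/tmp/host.sock'   # host-side bdev_nvme_* commands
TGTRPC='rpc.py'                          # target-side nvmf_subsystem_* commands
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562

run() { echo "$*"; }

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    run $HOSTRPC bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    run $TGTRPC nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN" --dhchap-key "key$keyid"
    run $HOSTRPC bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
        -q "$HOSTNQN" -n "$SUBNQN" -b nvme0 --dhchap-key "key$keyid"
    run $TGTRPC nvmf_subsystem_get_qpairs "$SUBNQN"   # then verify .auth.{digest,dhgroup,state}
    run $HOSTRPC bdev_nvme_detach_controller nvme0
    run $TGTRPC nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"
}

connect_authenticate sha384 null 0
```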
00:20:51.357 05:21:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:51.623 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:51.623 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:52.190 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:52.190 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:52.190 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:52.190 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.190 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.190 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.190 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:52.190 05:21:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:52.190 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.449 05:21:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:52.707 00:20:52.707 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:52.707 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:52.707 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:52.965 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:52.965 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:52.965 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:52.965 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:52.966 { 00:20:52.966 "cntlid": 51, 00:20:52.966 "qid": 0, 00:20:52.966 "state": "enabled", 00:20:52.966 "thread": "nvmf_tgt_poll_group_000", 00:20:52.966 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:52.966 "listen_address": { 00:20:52.966 "trtype": "TCP", 00:20:52.966 "adrfam": "IPv4", 00:20:52.966 "traddr": "10.0.0.2", 00:20:52.966 "trsvcid": "4420" 00:20:52.966 }, 00:20:52.966 "peer_address": { 00:20:52.966 "trtype": "TCP", 00:20:52.966 "adrfam": "IPv4", 00:20:52.966 "traddr": "10.0.0.1", 00:20:52.966 "trsvcid": "50002" 00:20:52.966 }, 00:20:52.966 "auth": { 00:20:52.966 "state": "completed", 00:20:52.966 "digest": "sha384", 00:20:52.966 "dhgroup": "null" 00:20:52.966 } 00:20:52.966 } 00:20:52.966 ]' 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:52.966 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:53.224 05:21:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:53.224 05:21:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:53.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:53.791 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.050 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:20:54.308 00:20:54.308 05:21:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:54.308 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:54.308 05:21:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:54.566 { 00:20:54.566 "cntlid": 53, 00:20:54.566 "qid": 0, 00:20:54.566 "state": "enabled", 00:20:54.566 "thread": "nvmf_tgt_poll_group_000", 00:20:54.566 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:54.566 "listen_address": { 00:20:54.566 "trtype": "TCP", 00:20:54.566 "adrfam": "IPv4", 00:20:54.566 "traddr": "10.0.0.2", 00:20:54.566 "trsvcid": "4420" 00:20:54.566 }, 00:20:54.566 "peer_address": { 00:20:54.566 "trtype": "TCP", 00:20:54.566 "adrfam": "IPv4", 00:20:54.566 "traddr": "10.0.0.1", 00:20:54.566 "trsvcid": "43718" 00:20:54.566 }, 00:20:54.566 "auth": { 00:20:54.566 "state": "completed", 00:20:54.566 "digest": "sha384", 00:20:54.566 "dhgroup": "null" 00:20:54.566 } 00:20:54.566 } 00:20:54.566 ]' 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:54.566 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:54.824 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:54.825 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:55.392 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.392 05:21:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:20:55.651 
05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.651 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:20:55.910 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:55.910 05:21:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:55.910 { 00:20:55.910 "cntlid": 55, 00:20:55.910 "qid": 0, 00:20:55.910 "state": "enabled", 00:20:55.910 "thread": "nvmf_tgt_poll_group_000", 00:20:55.910 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:55.910 "listen_address": { 00:20:55.910 "trtype": "TCP", 00:20:55.910 "adrfam": "IPv4", 00:20:55.910 "traddr": "10.0.0.2", 00:20:55.910 "trsvcid": "4420" 00:20:55.910 }, 00:20:55.910 "peer_address": { 00:20:55.910 "trtype": "TCP", 00:20:55.910 "adrfam": "IPv4", 00:20:55.910 "traddr": "10.0.0.1", 00:20:55.910 "trsvcid": "43748" 00:20:55.910 }, 00:20:55.910 "auth": { 00:20:55.910 "state": "completed", 00:20:55.910 "digest": "sha384", 00:20:55.910 "dhgroup": "null" 00:20:55.910 } 00:20:55.910 } 00:20:55.910 ]' 00:20:55.910 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:56.169 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:56.169 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:56.169 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:20:56.169 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:56.169 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:56.169 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:56.169 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
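Each iteration above ends by pulling the qpair list over RPC and asserting the negotiated auth parameters with three `jq` expressions (`.[0].auth.digest`, `.dhgroup`, `.state`). Those checks can be reproduced offline; the sketch below does the same three assertions in Python against a qpair record copied from this log (the JSON literal is from the log, everything else is illustrative):

```python
import json

# Sample nvmf_subsystem_get_qpairs output, abridged from the log above
# (cntlid 55, sha384 digest, null DH group).
qpairs_json = """
[
  {
    "cntlid": 55,
    "qid": 0,
    "state": "enabled",
    "auth": {
      "state": "completed",
      "digest": "sha384",
      "dhgroup": "null"
    }
  }
]
"""

qpairs = json.loads(qpairs_json)
auth = qpairs[0]["auth"]

# Mirrors the log's jq checks:
#   jq -r '.[0].auth.digest'  == sha384
#   jq -r '.[0].auth.dhgroup' == null
#   jq -r '.[0].auth.state'   == completed
assert auth["digest"] == "sha384"
assert auth["dhgroup"] == "null"
assert auth["state"] == "completed"
print("auth checks passed")
```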
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:56.428 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:56.428 05:21:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:56.995 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.995 05:21:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:56.995 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.254 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:20:57.254 00:20:57.254 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:57.254 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:57.254 05:21:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:57.512 { 00:20:57.512 "cntlid": 57, 00:20:57.512 "qid": 0, 00:20:57.512 "state": "enabled", 00:20:57.512 "thread": "nvmf_tgt_poll_group_000", 00:20:57.512 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:57.512 "listen_address": { 00:20:57.512 "trtype": "TCP", 00:20:57.512 "adrfam": "IPv4", 00:20:57.512 "traddr": "10.0.0.2", 00:20:57.512 
"trsvcid": "4420" 00:20:57.512 }, 00:20:57.512 "peer_address": { 00:20:57.512 "trtype": "TCP", 00:20:57.512 "adrfam": "IPv4", 00:20:57.512 "traddr": "10.0.0.1", 00:20:57.512 "trsvcid": "43788" 00:20:57.512 }, 00:20:57.512 "auth": { 00:20:57.512 "state": "completed", 00:20:57.512 "digest": "sha384", 00:20:57.512 "dhgroup": "ffdhe2048" 00:20:57.512 } 00:20:57.512 } 00:20:57.512 ]' 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:57.512 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:57.513 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:57.772 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:57.772 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:57.772 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:57.772 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:57.772 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:58.031 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:58.031 05:21:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:58.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:20:58.598 05:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.598 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:58.599 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:58.599 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:58.599 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.599 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.599 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:20:58.857 00:20:58.857 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:20:58.857 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:20:58.857 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:20:59.115 { 00:20:59.115 "cntlid": 59, 00:20:59.115 "qid": 0, 00:20:59.115 "state": "enabled", 00:20:59.115 "thread": "nvmf_tgt_poll_group_000", 00:20:59.115 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:20:59.115 "listen_address": { 00:20:59.115 "trtype": "TCP", 00:20:59.115 "adrfam": "IPv4", 00:20:59.115 "traddr": "10.0.0.2", 00:20:59.115 "trsvcid": "4420" 00:20:59.115 }, 00:20:59.115 "peer_address": { 00:20:59.115 "trtype": "TCP", 00:20:59.115 "adrfam": "IPv4", 00:20:59.115 "traddr": "10.0.0.1", 00:20:59.115 "trsvcid": "43818" 00:20:59.115 }, 00:20:59.115 "auth": { 00:20:59.115 "state": "completed", 00:20:59.115 "digest": "sha384", 00:20:59.115 "dhgroup": "ffdhe2048" 00:20:59.115 } 00:20:59.115 } 00:20:59.115 ]' 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:20:59.115 05:21:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:20:59.115 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:20:59.374 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:20:59.374 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:20:59.374 05:21:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:20:59.374 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:59.374 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:20:59.942 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:20:59.942 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.201 05:21:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:00.460 00:21:00.460 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:00.460 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:00.460 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:00.718 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:00.718 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:00.718 05:21:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:00.718 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:00.718 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:00.718 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:00.718 { 00:21:00.718 "cntlid": 61, 00:21:00.718 "qid": 0, 00:21:00.718 "state": "enabled", 00:21:00.718 "thread": "nvmf_tgt_poll_group_000", 00:21:00.718 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:00.718 "listen_address": { 00:21:00.718 "trtype": "TCP", 00:21:00.718 "adrfam": "IPv4", 00:21:00.718 "traddr": "10.0.0.2", 00:21:00.719 "trsvcid": "4420" 00:21:00.719 }, 00:21:00.719 "peer_address": { 00:21:00.719 "trtype": "TCP", 00:21:00.719 "adrfam": "IPv4", 00:21:00.719 "traddr": "10.0.0.1", 00:21:00.719 "trsvcid": "43838" 00:21:00.719 }, 00:21:00.719 "auth": { 00:21:00.719 "state": "completed", 00:21:00.719 "digest": "sha384", 00:21:00.719 "dhgroup": "ffdhe2048" 00:21:00.719 } 00:21:00.719 } 00:21:00.719 ]' 00:21:00.719 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:00.719 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:00.719 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:00.719 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:00.719 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:00.977 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:00.977 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:00.977 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:00.977 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:00.977 05:21:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:01.545 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.545 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:01.803 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:02.062 00:21:02.062 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:02.062 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:02.062 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:02.320 { 00:21:02.320 "cntlid": 63, 00:21:02.320 "qid": 0, 00:21:02.320 "state": "enabled", 00:21:02.320 "thread": "nvmf_tgt_poll_group_000", 00:21:02.320 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:02.320 "listen_address": { 00:21:02.320 "trtype": "TCP", 00:21:02.320 "adrfam": 
"IPv4", 00:21:02.320 "traddr": "10.0.0.2", 00:21:02.320 "trsvcid": "4420" 00:21:02.320 }, 00:21:02.320 "peer_address": { 00:21:02.320 "trtype": "TCP", 00:21:02.320 "adrfam": "IPv4", 00:21:02.320 "traddr": "10.0.0.1", 00:21:02.320 "trsvcid": "43878" 00:21:02.320 }, 00:21:02.320 "auth": { 00:21:02.320 "state": "completed", 00:21:02.320 "digest": "sha384", 00:21:02.320 "dhgroup": "ffdhe2048" 00:21:02.320 } 00:21:02.320 } 00:21:02.320 ]' 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:02.320 05:21:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:02.579 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:02.579 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:03.147 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.147 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:03.406 
05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.406 05:21:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:03.665 00:21:03.665 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:03.665 05:21:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:03.665 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:03.924 { 00:21:03.924 "cntlid": 65, 00:21:03.924 "qid": 0, 00:21:03.924 "state": "enabled", 00:21:03.924 "thread": "nvmf_tgt_poll_group_000", 00:21:03.924 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:03.924 "listen_address": { 00:21:03.924 "trtype": "TCP", 00:21:03.924 "adrfam": "IPv4", 00:21:03.924 "traddr": "10.0.0.2", 00:21:03.924 "trsvcid": "4420" 00:21:03.924 }, 00:21:03.924 "peer_address": { 00:21:03.924 "trtype": "TCP", 00:21:03.924 "adrfam": "IPv4", 00:21:03.924 "traddr": "10.0.0.1", 00:21:03.924 "trsvcid": "52340" 00:21:03.924 }, 00:21:03.924 "auth": { 00:21:03.924 "state": "completed", 00:21:03.924 "digest": "sha384", 00:21:03.924 "dhgroup": "ffdhe3072" 00:21:03.924 } 00:21:03.924 } 00:21:03.924 ]' 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:03.924 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:03.925 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:04.183 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:04.183 05:21:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:04.751 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:04.751 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.011 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:05.269 00:21:05.269 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:05.269 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:05.269 05:21:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:05.528 05:21:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:05.528 { 00:21:05.528 "cntlid": 67, 00:21:05.528 "qid": 0, 00:21:05.528 "state": "enabled", 00:21:05.528 "thread": "nvmf_tgt_poll_group_000", 00:21:05.528 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:05.528 "listen_address": { 00:21:05.528 "trtype": "TCP", 00:21:05.528 "adrfam": "IPv4", 00:21:05.528 "traddr": "10.0.0.2", 00:21:05.528 "trsvcid": "4420" 00:21:05.528 }, 00:21:05.528 "peer_address": { 00:21:05.528 "trtype": "TCP", 00:21:05.528 "adrfam": "IPv4", 00:21:05.528 "traddr": "10.0.0.1", 00:21:05.528 "trsvcid": "52360" 00:21:05.528 }, 00:21:05.528 "auth": { 00:21:05.528 "state": "completed", 00:21:05.528 "digest": "sha384", 00:21:05.528 "dhgroup": "ffdhe3072" 00:21:05.528 } 00:21:05.528 } 00:21:05.528 ]' 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:05.528 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:05.787 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:05.787 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:06.355 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.355 05:21:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.613 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:06.872 00:21:06.872 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:06.872 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:06.872 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:07.131 { 00:21:07.131 "cntlid": 69, 00:21:07.131 "qid": 0, 00:21:07.131 "state": "enabled", 00:21:07.131 "thread": "nvmf_tgt_poll_group_000", 00:21:07.131 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:07.131 
"listen_address": { 00:21:07.131 "trtype": "TCP", 00:21:07.131 "adrfam": "IPv4", 00:21:07.131 "traddr": "10.0.0.2", 00:21:07.131 "trsvcid": "4420" 00:21:07.131 }, 00:21:07.131 "peer_address": { 00:21:07.131 "trtype": "TCP", 00:21:07.131 "adrfam": "IPv4", 00:21:07.131 "traddr": "10.0.0.1", 00:21:07.131 "trsvcid": "52380" 00:21:07.131 }, 00:21:07.131 "auth": { 00:21:07.131 "state": "completed", 00:21:07.131 "digest": "sha384", 00:21:07.131 "dhgroup": "ffdhe3072" 00:21:07.131 } 00:21:07.131 } 00:21:07.131 ]' 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:07.131 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:07.408 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:07.408 05:21:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:07.975 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:07.975 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:07.975 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:07.975 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:07.975 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:07.975 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:07.976 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:07.976 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:07.976 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.234 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:08.494 00:21:08.494 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:08.494 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:21:08.494 05:21:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:08.494 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:08.494 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:08.494 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:08.494 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:08.753 { 00:21:08.753 "cntlid": 71, 00:21:08.753 "qid": 0, 00:21:08.753 "state": "enabled", 00:21:08.753 "thread": "nvmf_tgt_poll_group_000", 00:21:08.753 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:08.753 "listen_address": { 00:21:08.753 "trtype": "TCP", 00:21:08.753 "adrfam": "IPv4", 00:21:08.753 "traddr": "10.0.0.2", 00:21:08.753 "trsvcid": "4420" 00:21:08.753 }, 00:21:08.753 "peer_address": { 00:21:08.753 "trtype": "TCP", 00:21:08.753 "adrfam": "IPv4", 00:21:08.753 "traddr": "10.0.0.1", 00:21:08.753 "trsvcid": "52394" 00:21:08.753 }, 00:21:08.753 "auth": { 00:21:08.753 "state": "completed", 00:21:08.753 "digest": "sha384", 00:21:08.753 "dhgroup": "ffdhe3072" 00:21:08.753 } 00:21:08.753 } 00:21:08.753 ]' 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:08.753 05:21:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:08.753 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:09.012 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:09.012 05:21:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:09.580 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:21:09.580 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:09.839 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:10.098 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:10.098 05:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:10.098 { 00:21:10.098 "cntlid": 73, 00:21:10.098 "qid": 0, 00:21:10.098 "state": "enabled", 00:21:10.098 "thread": "nvmf_tgt_poll_group_000", 00:21:10.098 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:10.098 "listen_address": { 00:21:10.098 "trtype": "TCP", 00:21:10.098 "adrfam": "IPv4", 00:21:10.098 "traddr": "10.0.0.2", 00:21:10.098 "trsvcid": "4420" 00:21:10.098 }, 00:21:10.098 "peer_address": { 00:21:10.098 "trtype": "TCP", 00:21:10.098 "adrfam": "IPv4", 00:21:10.098 "traddr": "10.0.0.1", 00:21:10.098 "trsvcid": "52414" 00:21:10.098 }, 00:21:10.098 "auth": { 00:21:10.098 "state": "completed", 00:21:10.098 "digest": "sha384", 00:21:10.098 "dhgroup": "ffdhe4096" 00:21:10.098 } 00:21:10.098 } 00:21:10.098 ]' 00:21:10.098 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:10.357 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:10.357 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:10.357 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:10.357 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:10.357 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:10.357 05:21:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:10.357 05:21:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:10.617 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:10.617 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:11.185 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.185 05:21:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:11.445 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:11.703 { 00:21:11.703 "cntlid": 75, 00:21:11.703 "qid": 0, 00:21:11.703 "state": "enabled", 00:21:11.703 "thread": "nvmf_tgt_poll_group_000", 00:21:11.703 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:11.703 
"listen_address": { 00:21:11.703 "trtype": "TCP", 00:21:11.703 "adrfam": "IPv4", 00:21:11.703 "traddr": "10.0.0.2", 00:21:11.703 "trsvcid": "4420" 00:21:11.703 }, 00:21:11.703 "peer_address": { 00:21:11.703 "trtype": "TCP", 00:21:11.703 "adrfam": "IPv4", 00:21:11.703 "traddr": "10.0.0.1", 00:21:11.703 "trsvcid": "52448" 00:21:11.703 }, 00:21:11.703 "auth": { 00:21:11.703 "state": "completed", 00:21:11.703 "digest": "sha384", 00:21:11.703 "dhgroup": "ffdhe4096" 00:21:11.703 } 00:21:11.703 } 00:21:11.703 ]' 00:21:11.703 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:11.962 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:11.962 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:11.962 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:11.962 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:11.962 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:11.962 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:11.962 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:12.221 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:12.221 05:21:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:12.789 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.789 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.790 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.049 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.049 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.049 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.049 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:13.308 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:13.308 { 00:21:13.308 "cntlid": 77, 00:21:13.308 "qid": 0, 00:21:13.308 "state": "enabled", 00:21:13.308 "thread": "nvmf_tgt_poll_group_000", 00:21:13.308 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:13.308 "listen_address": { 00:21:13.308 "trtype": "TCP", 00:21:13.308 "adrfam": "IPv4", 00:21:13.308 "traddr": "10.0.0.2", 00:21:13.308 "trsvcid": "4420" 00:21:13.308 }, 00:21:13.308 "peer_address": { 00:21:13.308 "trtype": "TCP", 00:21:13.308 "adrfam": "IPv4", 00:21:13.308 "traddr": "10.0.0.1", 00:21:13.308 "trsvcid": "52480" 00:21:13.308 }, 00:21:13.308 "auth": { 00:21:13.308 "state": "completed", 00:21:13.308 "digest": "sha384", 00:21:13.308 "dhgroup": "ffdhe4096" 00:21:13.308 } 00:21:13.308 } 00:21:13.308 ]' 00:21:13.308 05:21:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:13.566 05:21:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:13.566 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:13.567 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:13.567 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:13.567 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:13.567 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:13.567 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:13.825 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:13.825 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:14.394 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.394 05:21:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:14.394 05:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.394 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:14.653 00:21:14.653 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:14.653 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:14.653 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:14.913 05:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:14.913 { 00:21:14.913 "cntlid": 79, 00:21:14.913 "qid": 0, 00:21:14.913 "state": "enabled", 00:21:14.913 "thread": "nvmf_tgt_poll_group_000", 00:21:14.913 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:14.913 "listen_address": { 00:21:14.913 "trtype": "TCP", 00:21:14.913 "adrfam": "IPv4", 00:21:14.913 "traddr": "10.0.0.2", 00:21:14.913 "trsvcid": "4420" 00:21:14.913 }, 00:21:14.913 "peer_address": { 00:21:14.913 "trtype": "TCP", 00:21:14.913 "adrfam": "IPv4", 00:21:14.913 "traddr": "10.0.0.1", 00:21:14.913 "trsvcid": "56194" 00:21:14.913 }, 00:21:14.913 "auth": { 00:21:14.913 "state": "completed", 00:21:14.913 "digest": "sha384", 00:21:14.913 "dhgroup": "ffdhe4096" 00:21:14.913 } 00:21:14.913 } 00:21:14.913 ]' 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:14.913 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:15.172 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:15.172 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:15.172 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:15.172 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:15.172 05:21:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:15.431 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:15.431 05:21:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:15.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:15.999 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:16.567 00:21:16.567 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:16.567 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:16.567 05:21:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:16.567 { 00:21:16.567 "cntlid": 81, 00:21:16.567 "qid": 0, 00:21:16.567 "state": "enabled", 00:21:16.567 "thread": "nvmf_tgt_poll_group_000", 00:21:16.567 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:16.567 "listen_address": { 
00:21:16.567 "trtype": "TCP", 00:21:16.567 "adrfam": "IPv4", 00:21:16.567 "traddr": "10.0.0.2", 00:21:16.567 "trsvcid": "4420" 00:21:16.567 }, 00:21:16.567 "peer_address": { 00:21:16.567 "trtype": "TCP", 00:21:16.567 "adrfam": "IPv4", 00:21:16.567 "traddr": "10.0.0.1", 00:21:16.567 "trsvcid": "56228" 00:21:16.567 }, 00:21:16.567 "auth": { 00:21:16.567 "state": "completed", 00:21:16.567 "digest": "sha384", 00:21:16.567 "dhgroup": "ffdhe6144" 00:21:16.567 } 00:21:16.567 } 00:21:16.567 ]' 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:16.567 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:16.825 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:16.825 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:16.825 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:16.825 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:16.825 05:21:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:17.402 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.402 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.660 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:17.919 00:21:18.179 05:21:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:18.179 { 00:21:18.179 "cntlid": 83, 00:21:18.179 "qid": 0, 00:21:18.179 "state": "enabled", 00:21:18.179 "thread": "nvmf_tgt_poll_group_000", 00:21:18.179 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:18.179 "listen_address": { 00:21:18.179 "trtype": "TCP", 00:21:18.179 "adrfam": "IPv4", 00:21:18.179 "traddr": "10.0.0.2", 00:21:18.179 "trsvcid": "4420" 00:21:18.179 }, 00:21:18.179 "peer_address": { 00:21:18.179 "trtype": "TCP", 00:21:18.179 "adrfam": "IPv4", 00:21:18.179 "traddr": "10.0.0.1", 00:21:18.179 "trsvcid": "56268" 00:21:18.179 }, 00:21:18.179 "auth": { 00:21:18.179 "state": "completed", 00:21:18.179 "digest": "sha384", 00:21:18.179 "dhgroup": "ffdhe6144" 00:21:18.179 } 00:21:18.179 } 00:21:18.179 ]' 00:21:18.179 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:21:18.437 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:18.437 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:18.437 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:18.437 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:18.437 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:18.437 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:18.437 05:21:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:18.696 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:18.696 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:19.264 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:19.264 05:21:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.264 05:21:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:19.832 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:19.832 { 00:21:19.832 "cntlid": 85, 00:21:19.832 "qid": 0, 00:21:19.832 "state": "enabled", 00:21:19.832 "thread": "nvmf_tgt_poll_group_000", 00:21:19.832 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:19.832 "listen_address": { 00:21:19.832 "trtype": "TCP", 00:21:19.832 "adrfam": "IPv4", 00:21:19.832 "traddr": "10.0.0.2", 00:21:19.832 "trsvcid": "4420" 00:21:19.832 }, 00:21:19.832 "peer_address": { 00:21:19.832 "trtype": "TCP", 00:21:19.832 "adrfam": "IPv4", 00:21:19.832 "traddr": "10.0.0.1", 00:21:19.832 "trsvcid": "56282" 00:21:19.832 }, 00:21:19.832 "auth": { 00:21:19.832 "state": "completed", 00:21:19.832 "digest": "sha384", 00:21:19.832 "dhgroup": "ffdhe6144" 00:21:19.832 } 00:21:19.832 } 00:21:19.832 ]' 00:21:19.832 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:20.091 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:20.091 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:20.091 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:20.091 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:20.091 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:20.091 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:20.091 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:20.350 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:20.350 05:21:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:20.917 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:20.917 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.176 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:21.435 00:21:21.435 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:21.435 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:21.435 05:21:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:21.700 { 00:21:21.700 "cntlid": 87, 00:21:21.700 "qid": 0, 00:21:21.700 "state": "enabled", 00:21:21.700 "thread": "nvmf_tgt_poll_group_000", 00:21:21.700 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:21.700 "listen_address": { 00:21:21.700 "trtype": 
"TCP", 00:21:21.700 "adrfam": "IPv4", 00:21:21.700 "traddr": "10.0.0.2", 00:21:21.700 "trsvcid": "4420" 00:21:21.700 }, 00:21:21.700 "peer_address": { 00:21:21.700 "trtype": "TCP", 00:21:21.700 "adrfam": "IPv4", 00:21:21.700 "traddr": "10.0.0.1", 00:21:21.700 "trsvcid": "56300" 00:21:21.700 }, 00:21:21.700 "auth": { 00:21:21.700 "state": "completed", 00:21:21.700 "digest": "sha384", 00:21:21.700 "dhgroup": "ffdhe6144" 00:21:21.700 } 00:21:21.700 } 00:21:21.700 ]' 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:21.700 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:21.961 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:21.961 05:21:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:22.529 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.529 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:22.788 05:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:22.788 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:23.356 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:23.356 { 00:21:23.356 "cntlid": 89, 00:21:23.356 "qid": 0, 00:21:23.356 "state": "enabled", 00:21:23.356 "thread": "nvmf_tgt_poll_group_000", 00:21:23.356 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:23.356 "listen_address": { 00:21:23.356 "trtype": "TCP", 00:21:23.356 "adrfam": "IPv4", 00:21:23.356 "traddr": "10.0.0.2", 00:21:23.356 "trsvcid": "4420" 00:21:23.356 }, 00:21:23.356 "peer_address": { 00:21:23.356 "trtype": "TCP", 00:21:23.356 "adrfam": "IPv4", 00:21:23.356 "traddr": "10.0.0.1", 00:21:23.356 "trsvcid": "56324" 00:21:23.356 }, 00:21:23.356 "auth": { 00:21:23.356 "state": "completed", 00:21:23.356 "digest": "sha384", 00:21:23.356 "dhgroup": "ffdhe8192" 00:21:23.356 } 00:21:23.356 } 00:21:23.356 ]' 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:23.356 05:21:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:23.356 05:21:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:23.614 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:23.614 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:23.614 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:23.614 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:23.614 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:23.614 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:23.614 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:24.181 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:24.181 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:21:24.181 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:24.181 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.181 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.440 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.440 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:24.440 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.440 05:21:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:24.440 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:25.008 00:21:25.008 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:25.008 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:25.008 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:25.267 { 00:21:25.267 "cntlid": 91, 00:21:25.267 "qid": 0, 00:21:25.267 "state": "enabled", 00:21:25.267 "thread": "nvmf_tgt_poll_group_000", 00:21:25.267 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:25.267 "listen_address": { 00:21:25.267 "trtype": "TCP", 00:21:25.267 "adrfam": "IPv4", 00:21:25.267 "traddr": "10.0.0.2", 00:21:25.267 "trsvcid": "4420" 00:21:25.267 }, 00:21:25.267 "peer_address": { 00:21:25.267 "trtype": "TCP", 00:21:25.267 "adrfam": "IPv4", 00:21:25.267 "traddr": "10.0.0.1", 00:21:25.267 "trsvcid": "52264" 00:21:25.267 }, 00:21:25.267 "auth": { 00:21:25.267 "state": "completed", 00:21:25.267 "digest": "sha384", 00:21:25.267 "dhgroup": "ffdhe8192" 00:21:25.267 } 00:21:25.267 } 00:21:25.267 ]' 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:25.267 05:21:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:25.526 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:25.526 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:26.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.094 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.353 05:21:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:26.920 00:21:26.920 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:26.920 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:26.920 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:26.920 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:27.179 { 00:21:27.179 "cntlid": 93, 00:21:27.179 "qid": 0, 00:21:27.179 "state": "enabled", 00:21:27.179 "thread": "nvmf_tgt_poll_group_000", 00:21:27.179 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:27.179 "listen_address": { 00:21:27.179 "trtype": "TCP", 00:21:27.179 "adrfam": "IPv4", 00:21:27.179 "traddr": "10.0.0.2", 00:21:27.179 "trsvcid": "4420" 00:21:27.179 }, 00:21:27.179 "peer_address": { 00:21:27.179 "trtype": "TCP", 00:21:27.179 "adrfam": "IPv4", 00:21:27.179 "traddr": "10.0.0.1", 00:21:27.179 "trsvcid": "52290" 00:21:27.179 }, 00:21:27.179 "auth": { 00:21:27.179 "state": "completed", 00:21:27.179 "digest": "sha384", 00:21:27.179 "dhgroup": "ffdhe8192" 00:21:27.179 } 00:21:27.179 } 00:21:27.179 ]' 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:27.179 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:27.438 05:21:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:27.438 05:21:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:28.006 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.006 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.266 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.266 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:28.266 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.266 05:21:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:28.525 00:21:28.525 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:21:28.525 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:28.525 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:28.784 { 00:21:28.784 "cntlid": 95, 00:21:28.784 "qid": 0, 00:21:28.784 "state": "enabled", 00:21:28.784 "thread": "nvmf_tgt_poll_group_000", 00:21:28.784 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:28.784 "listen_address": { 00:21:28.784 "trtype": "TCP", 00:21:28.784 "adrfam": "IPv4", 00:21:28.784 "traddr": "10.0.0.2", 00:21:28.784 "trsvcid": "4420" 00:21:28.784 }, 00:21:28.784 "peer_address": { 00:21:28.784 "trtype": "TCP", 00:21:28.784 "adrfam": "IPv4", 00:21:28.784 "traddr": "10.0.0.1", 00:21:28.784 "trsvcid": "52324" 00:21:28.784 }, 00:21:28.784 "auth": { 00:21:28.784 "state": "completed", 00:21:28.784 "digest": "sha384", 00:21:28.784 "dhgroup": "ffdhe8192" 00:21:28.784 } 00:21:28.784 } 00:21:28.784 ]' 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:28.784 05:21:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:21:28.784 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:29.043 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:29.043 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:29.043 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:29.043 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:29.043 05:21:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:29.611 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.611 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:29.870 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:30.129 00:21:30.129 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:30.129 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:30.129 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:30.387 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:30.388 05:21:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:30.388 { 00:21:30.388 "cntlid": 97, 00:21:30.388 "qid": 0, 00:21:30.388 "state": "enabled", 00:21:30.388 "thread": "nvmf_tgt_poll_group_000", 00:21:30.388 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:30.388 "listen_address": { 00:21:30.388 "trtype": "TCP", 00:21:30.388 "adrfam": "IPv4", 00:21:30.388 "traddr": "10.0.0.2", 00:21:30.388 "trsvcid": "4420" 00:21:30.388 }, 00:21:30.388 "peer_address": { 00:21:30.388 "trtype": "TCP", 00:21:30.388 "adrfam": "IPv4", 00:21:30.388 "traddr": "10.0.0.1", 00:21:30.388 "trsvcid": "52356" 00:21:30.388 }, 00:21:30.388 "auth": { 00:21:30.388 "state": "completed", 00:21:30.388 "digest": "sha512", 00:21:30.388 "dhgroup": "null" 00:21:30.388 } 00:21:30.388 } 00:21:30.388 ]' 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:30.388 05:21:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:30.388 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:30.388 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:30.388 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:30.646 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:30.646 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:31.215 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.215 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.474 05:21:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.474 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.474 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.474 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.474 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:31.733 00:21:31.733 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:31.733 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:31.733 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:31.992 { 00:21:31.992 "cntlid": 99, 
00:21:31.992 "qid": 0, 00:21:31.992 "state": "enabled", 00:21:31.992 "thread": "nvmf_tgt_poll_group_000", 00:21:31.992 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:31.992 "listen_address": { 00:21:31.992 "trtype": "TCP", 00:21:31.992 "adrfam": "IPv4", 00:21:31.992 "traddr": "10.0.0.2", 00:21:31.992 "trsvcid": "4420" 00:21:31.992 }, 00:21:31.992 "peer_address": { 00:21:31.992 "trtype": "TCP", 00:21:31.992 "adrfam": "IPv4", 00:21:31.992 "traddr": "10.0.0.1", 00:21:31.992 "trsvcid": "52384" 00:21:31.992 }, 00:21:31.992 "auth": { 00:21:31.992 "state": "completed", 00:21:31.992 "digest": "sha512", 00:21:31.992 "dhgroup": "null" 00:21:31.992 } 00:21:31.992 } 00:21:31.992 ]' 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:31.992 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:32.252 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret 
DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:32.252 05:21:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:32.820 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:32.820 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.080 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:33.340 00:21:33.340 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:33.340 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:33.340 05:21:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:33.599 { 00:21:33.599 "cntlid": 101, 00:21:33.599 "qid": 0, 00:21:33.599 "state": "enabled", 00:21:33.599 "thread": "nvmf_tgt_poll_group_000", 00:21:33.599 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:33.599 "listen_address": { 00:21:33.599 "trtype": "TCP", 00:21:33.599 "adrfam": "IPv4", 00:21:33.599 "traddr": "10.0.0.2", 00:21:33.599 "trsvcid": "4420" 00:21:33.599 }, 00:21:33.599 "peer_address": { 00:21:33.599 "trtype": "TCP", 00:21:33.599 "adrfam": "IPv4", 00:21:33.599 "traddr": "10.0.0.1", 00:21:33.599 "trsvcid": "52404" 00:21:33.599 }, 00:21:33.599 "auth": { 00:21:33.599 "state": "completed", 00:21:33.599 "digest": "sha512", 00:21:33.599 "dhgroup": "null" 00:21:33.599 } 00:21:33.599 } 
00:21:33.599 ]' 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:33.599 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:33.858 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:33.858 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:34.427 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.427 05:21:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.686 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:34.946 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:34.946 { 00:21:34.946 "cntlid": 103, 00:21:34.946 "qid": 0, 00:21:34.946 "state": "enabled", 00:21:34.946 "thread": "nvmf_tgt_poll_group_000", 00:21:34.946 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:34.946 "listen_address": { 00:21:34.946 "trtype": "TCP", 00:21:34.946 "adrfam": "IPv4", 00:21:34.946 "traddr": "10.0.0.2", 00:21:34.946 "trsvcid": "4420" 00:21:34.946 }, 00:21:34.946 "peer_address": { 00:21:34.946 "trtype": "TCP", 00:21:34.946 "adrfam": "IPv4", 00:21:34.946 "traddr": "10.0.0.1", 00:21:34.946 "trsvcid": "35032" 00:21:34.946 }, 00:21:34.946 "auth": { 00:21:34.946 "state": "completed", 00:21:34.946 "digest": "sha512", 00:21:34.946 "dhgroup": "null" 00:21:34.946 } 00:21:34.946 } 00:21:34.946 ]' 00:21:34.946 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:35.206 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:35.206 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:35.206 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:21:35.206 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:35.206 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:35.206 05:21:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:35.206 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:35.466 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:35.466 05:21:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:36.035 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:36.035 05:21:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.035 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:36.295 00:21:36.295 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:36.295 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:36.295 05:21:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:36.554 { 00:21:36.554 "cntlid": 105, 00:21:36.554 "qid": 0, 00:21:36.554 "state": "enabled", 00:21:36.554 "thread": "nvmf_tgt_poll_group_000", 00:21:36.554 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:36.554 "listen_address": { 00:21:36.554 "trtype": "TCP", 00:21:36.554 "adrfam": "IPv4", 00:21:36.554 "traddr": "10.0.0.2", 00:21:36.554 "trsvcid": "4420" 00:21:36.554 }, 00:21:36.554 "peer_address": { 00:21:36.554 "trtype": "TCP", 00:21:36.554 "adrfam": "IPv4", 00:21:36.554 "traddr": "10.0.0.1", 00:21:36.554 "trsvcid": "35062" 00:21:36.554 }, 00:21:36.554 "auth": { 00:21:36.554 "state": "completed", 00:21:36.554 "digest": "sha512", 00:21:36.554 "dhgroup": "ffdhe2048" 00:21:36.554 } 00:21:36.554 } 00:21:36.554 ]' 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:36.554 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:36.813 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:36.813 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:36.813 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:36.813 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:36.813 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:37.073 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret 
DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:37.073 05:21:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:37.641 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:37.641 05:21:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.641 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:37.900 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.900 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.900 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.900 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:37.900 00:21:37.900 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:37.900 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:37.900 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:38.160 { 00:21:38.160 "cntlid": 107, 00:21:38.160 "qid": 0, 00:21:38.160 "state": "enabled", 00:21:38.160 "thread": "nvmf_tgt_poll_group_000", 00:21:38.160 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:38.160 "listen_address": { 00:21:38.160 "trtype": "TCP", 00:21:38.160 "adrfam": "IPv4", 00:21:38.160 "traddr": "10.0.0.2", 00:21:38.160 "trsvcid": "4420" 00:21:38.160 }, 00:21:38.160 "peer_address": { 00:21:38.160 "trtype": "TCP", 00:21:38.160 "adrfam": "IPv4", 00:21:38.160 "traddr": "10.0.0.1", 00:21:38.160 "trsvcid": "35086" 00:21:38.160 }, 00:21:38.160 "auth": { 00:21:38.160 "state": 
"completed", 00:21:38.160 "digest": "sha512", 00:21:38.160 "dhgroup": "ffdhe2048" 00:21:38.160 } 00:21:38.160 } 00:21:38.160 ]' 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:38.160 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:38.419 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:38.419 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:38.419 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:38.419 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:38.419 05:21:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:38.679 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:38.679 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:39.246 05:21:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:39.246 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.246 05:21:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:39.505 00:21:39.505 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:39.505 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:39.505 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:39.764 
05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:39.764 { 00:21:39.764 "cntlid": 109, 00:21:39.764 "qid": 0, 00:21:39.764 "state": "enabled", 00:21:39.764 "thread": "nvmf_tgt_poll_group_000", 00:21:39.764 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:39.764 "listen_address": { 00:21:39.764 "trtype": "TCP", 00:21:39.764 "adrfam": "IPv4", 00:21:39.764 "traddr": "10.0.0.2", 00:21:39.764 "trsvcid": "4420" 00:21:39.764 }, 00:21:39.764 "peer_address": { 00:21:39.764 "trtype": "TCP", 00:21:39.764 "adrfam": "IPv4", 00:21:39.764 "traddr": "10.0.0.1", 00:21:39.764 "trsvcid": "35112" 00:21:39.764 }, 00:21:39.764 "auth": { 00:21:39.764 "state": "completed", 00:21:39.764 "digest": "sha512", 00:21:39.764 "dhgroup": "ffdhe2048" 00:21:39.764 } 00:21:39.764 } 00:21:39.764 ]' 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:39.764 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:40.023 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:40.023 05:21:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:40.024 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:40.024 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:40.024 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:40.024 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:40.024 05:21:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:40.593 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:40.593 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:40.593 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:40.593 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.593 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.593 
05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.593 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:40.593 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.593 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:40.852 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:40.853 05:21:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:40.853 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:40.853 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:41.112 00:21:41.112 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:41.112 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:41.112 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:41.373 { 00:21:41.373 "cntlid": 111, 
00:21:41.373 "qid": 0, 00:21:41.373 "state": "enabled", 00:21:41.373 "thread": "nvmf_tgt_poll_group_000", 00:21:41.373 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:41.373 "listen_address": { 00:21:41.373 "trtype": "TCP", 00:21:41.373 "adrfam": "IPv4", 00:21:41.373 "traddr": "10.0.0.2", 00:21:41.373 "trsvcid": "4420" 00:21:41.373 }, 00:21:41.373 "peer_address": { 00:21:41.373 "trtype": "TCP", 00:21:41.373 "adrfam": "IPv4", 00:21:41.373 "traddr": "10.0.0.1", 00:21:41.373 "trsvcid": "35136" 00:21:41.373 }, 00:21:41.373 "auth": { 00:21:41.373 "state": "completed", 00:21:41.373 "digest": "sha512", 00:21:41.373 "dhgroup": "ffdhe2048" 00:21:41.373 } 00:21:41.373 } 00:21:41.373 ]' 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:21:41.373 05:21:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:41.373 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:41.373 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:41.373 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:41.633 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:41.633 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:42.202 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.202 05:21:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:42.462 05:21:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.462 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:42.722 00:21:42.722 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:42.722 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:42.722 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:42.981 { 00:21:42.981 "cntlid": 113, 00:21:42.981 "qid": 0, 00:21:42.981 "state": "enabled", 00:21:42.981 "thread": "nvmf_tgt_poll_group_000", 00:21:42.981 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:42.981 "listen_address": { 00:21:42.981 "trtype": "TCP", 00:21:42.981 "adrfam": "IPv4", 00:21:42.981 "traddr": "10.0.0.2", 00:21:42.981 "trsvcid": "4420" 00:21:42.981 }, 00:21:42.981 "peer_address": { 00:21:42.981 "trtype": "TCP", 00:21:42.981 "adrfam": "IPv4", 00:21:42.981 "traddr": "10.0.0.1", 00:21:42.981 "trsvcid": "35158" 00:21:42.981 }, 00:21:42.981 "auth": { 00:21:42.981 "state": 
"completed", 00:21:42.981 "digest": "sha512", 00:21:42.981 "dhgroup": "ffdhe3072" 00:21:42.981 } 00:21:42.981 } 00:21:42.981 ]' 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:42.981 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:43.240 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:43.240 05:21:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret 
DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:43.809 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:43.809 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.069 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:44.328 00:21:44.328 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:44.328 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:44.328 05:21:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:44.588 { 00:21:44.588 "cntlid": 115, 00:21:44.588 "qid": 0, 00:21:44.588 "state": "enabled", 00:21:44.588 "thread": "nvmf_tgt_poll_group_000", 00:21:44.588 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:44.588 "listen_address": { 00:21:44.588 "trtype": "TCP", 00:21:44.588 "adrfam": "IPv4", 00:21:44.588 "traddr": "10.0.0.2", 00:21:44.588 "trsvcid": "4420" 00:21:44.588 }, 00:21:44.588 "peer_address": { 00:21:44.588 "trtype": "TCP", 00:21:44.588 "adrfam": "IPv4", 00:21:44.588 "traddr": "10.0.0.1", 00:21:44.588 "trsvcid": "56972" 00:21:44.588 }, 00:21:44.588 "auth": { 00:21:44.588 "state": "completed", 00:21:44.588 "digest": "sha512", 00:21:44.588 "dhgroup": "ffdhe3072" 00:21:44.588 } 00:21:44.588 } 00:21:44.588 ]' 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:44.588 05:21:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:44.588 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:44.848 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:44.848 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:45.418 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.418 05:21:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.678 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:45.938 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:45.938 05:21:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:45.938 { 00:21:45.938 "cntlid": 117, 00:21:45.938 "qid": 0, 00:21:45.938 "state": "enabled", 00:21:45.938 "thread": "nvmf_tgt_poll_group_000", 00:21:45.938 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:45.938 "listen_address": { 00:21:45.938 "trtype": "TCP", 00:21:45.938 "adrfam": "IPv4", 00:21:45.938 "traddr": "10.0.0.2", 00:21:45.938 "trsvcid": "4420" 00:21:45.938 }, 00:21:45.938 "peer_address": { 00:21:45.938 "trtype": "TCP", 00:21:45.938 "adrfam": "IPv4", 00:21:45.938 "traddr": "10.0.0.1", 00:21:45.938 "trsvcid": "56992" 00:21:45.938 }, 00:21:45.938 "auth": { 00:21:45.938 "state": "completed", 00:21:45.938 "digest": "sha512", 00:21:45.938 "dhgroup": "ffdhe3072" 00:21:45.938 } 00:21:45.938 } 00:21:45.938 ]' 00:21:45.938 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:46.197 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:46.197 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:46.197 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:46.197 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:46.197 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:46.197 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:46.197 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:46.455 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:46.455 05:21:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:47.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.022 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:21:47.280 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.281 05:22:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:47.539 00:21:47.540 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:47.540 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:47.540 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:47.540 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:47.540 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:47.540 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:47.540 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:47.799 { 00:21:47.799 "cntlid": 119, 00:21:47.799 "qid": 0, 00:21:47.799 "state": "enabled", 00:21:47.799 "thread": "nvmf_tgt_poll_group_000", 00:21:47.799 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:47.799 "listen_address": { 00:21:47.799 "trtype": "TCP", 00:21:47.799 "adrfam": "IPv4", 00:21:47.799 "traddr": "10.0.0.2", 00:21:47.799 "trsvcid": "4420" 00:21:47.799 }, 00:21:47.799 "peer_address": { 00:21:47.799 "trtype": "TCP", 00:21:47.799 "adrfam": "IPv4", 00:21:47.799 "traddr": "10.0.0.1", 
00:21:47.799 "trsvcid": "57020" 00:21:47.799 }, 00:21:47.799 "auth": { 00:21:47.799 "state": "completed", 00:21:47.799 "digest": "sha512", 00:21:47.799 "dhgroup": "ffdhe3072" 00:21:47.799 } 00:21:47.799 } 00:21:47.799 ]' 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:47.799 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:48.065 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:48.065 05:22:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:48.633 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:48.633 05:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:48.633 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:48.892 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:48.892 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.892 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:48.892 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:49.150 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:49.151 { 00:21:49.151 "cntlid": 121, 00:21:49.151 "qid": 0, 00:21:49.151 "state": "enabled", 00:21:49.151 "thread": "nvmf_tgt_poll_group_000", 00:21:49.151 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:49.151 "listen_address": { 00:21:49.151 "trtype": "TCP", 00:21:49.151 "adrfam": "IPv4", 00:21:49.151 "traddr": "10.0.0.2", 00:21:49.151 "trsvcid": "4420" 00:21:49.151 }, 00:21:49.151 "peer_address": { 00:21:49.151 "trtype": "TCP", 00:21:49.151 "adrfam": "IPv4", 00:21:49.151 "traddr": "10.0.0.1", 00:21:49.151 "trsvcid": "57038" 00:21:49.151 }, 00:21:49.151 "auth": { 00:21:49.151 "state": "completed", 00:21:49.151 "digest": "sha512", 00:21:49.151 "dhgroup": "ffdhe4096" 00:21:49.151 } 00:21:49.151 } 00:21:49.151 ]' 00:21:49.151 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:49.409 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:49.409 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:49.409 05:22:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:49.409 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:49.409 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:49.409 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:49.409 05:22:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:49.668 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:49.668 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:50.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:50.237 05:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.237 05:22:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.237 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.496 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.496 05:22:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:50.755 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:50.755 { 00:21:50.755 "cntlid": 123, 00:21:50.755 "qid": 0, 00:21:50.755 "state": "enabled", 00:21:50.755 "thread": "nvmf_tgt_poll_group_000", 00:21:50.755 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:50.755 "listen_address": { 00:21:50.755 "trtype": "TCP", 00:21:50.755 "adrfam": "IPv4", 00:21:50.755 "traddr": "10.0.0.2", 00:21:50.755 "trsvcid": "4420" 00:21:50.755 }, 00:21:50.755 "peer_address": { 00:21:50.755 "trtype": "TCP", 00:21:50.755 "adrfam": "IPv4", 00:21:50.755 "traddr": "10.0.0.1", 00:21:50.755 "trsvcid": "57068" 00:21:50.755 }, 00:21:50.755 "auth": { 00:21:50.755 "state": "completed", 00:21:50.755 "digest": "sha512", 00:21:50.755 "dhgroup": "ffdhe4096" 00:21:50.755 } 00:21:50.755 } 00:21:50.755 ]' 00:21:50.755 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:51.014 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:51.014 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:51.014 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:51.014 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:51.014 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:51.014 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:51.014 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:51.273 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:51.273 05:22:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:51.841 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.841 05:22:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:51.841 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:52.099 00:21:52.099 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:52.099 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:52.099 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:52.359 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:52.359 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:52.359 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:52.359 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:52.359 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:52.359 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:52.359 { 00:21:52.359 "cntlid": 125, 00:21:52.359 "qid": 0, 00:21:52.359 "state": "enabled", 00:21:52.359 "thread": "nvmf_tgt_poll_group_000", 00:21:52.359 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:52.359 "listen_address": { 00:21:52.359 "trtype": "TCP", 00:21:52.359 "adrfam": "IPv4", 00:21:52.359 "traddr": "10.0.0.2", 00:21:52.359 
"trsvcid": "4420" 00:21:52.359 }, 00:21:52.359 "peer_address": { 00:21:52.359 "trtype": "TCP", 00:21:52.359 "adrfam": "IPv4", 00:21:52.359 "traddr": "10.0.0.1", 00:21:52.359 "trsvcid": "57102" 00:21:52.359 }, 00:21:52.359 "auth": { 00:21:52.359 "state": "completed", 00:21:52.359 "digest": "sha512", 00:21:52.359 "dhgroup": "ffdhe4096" 00:21:52.359 } 00:21:52.359 } 00:21:52.359 ]' 00:21:52.359 05:22:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:52.359 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:52.359 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:52.619 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:52.619 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:52.619 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:52.619 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:52.619 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:52.619 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:52.619 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:53.187 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:53.187 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:53.187 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:53.187 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.187 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.187 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.187 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:53.187 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.446 05:22:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.446 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:53.705 00:21:53.705 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:53.705 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:53.705 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:53.964 { 00:21:53.964 "cntlid": 127, 00:21:53.964 "qid": 0, 00:21:53.964 "state": "enabled", 00:21:53.964 "thread": "nvmf_tgt_poll_group_000", 00:21:53.964 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:53.964 "listen_address": { 00:21:53.964 "trtype": "TCP", 00:21:53.964 "adrfam": "IPv4", 00:21:53.964 "traddr": "10.0.0.2", 00:21:53.964 "trsvcid": "4420" 00:21:53.964 }, 00:21:53.964 "peer_address": { 00:21:53.964 "trtype": "TCP", 00:21:53.964 "adrfam": "IPv4", 00:21:53.964 "traddr": "10.0.0.1", 00:21:53.964 "trsvcid": "58244" 00:21:53.964 }, 00:21:53.964 "auth": { 00:21:53.964 "state": "completed", 00:21:53.964 "digest": "sha512", 00:21:53.964 "dhgroup": "ffdhe4096" 00:21:53.964 } 00:21:53.964 } 00:21:53.964 ]' 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:53.964 05:22:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:21:53.964 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:54.222 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:54.222 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:54.222 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:54.222 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:54.222 05:22:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:54.791 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:54.791 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.050 05:22:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:21:55.309 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:55.568 05:22:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:55.568 { 00:21:55.568 "cntlid": 129, 00:21:55.568 "qid": 0, 00:21:55.568 "state": "enabled", 00:21:55.568 "thread": "nvmf_tgt_poll_group_000", 00:21:55.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:55.568 "listen_address": { 00:21:55.568 "trtype": "TCP", 00:21:55.568 "adrfam": "IPv4", 00:21:55.568 "traddr": "10.0.0.2", 00:21:55.568 "trsvcid": "4420" 00:21:55.568 }, 00:21:55.568 "peer_address": { 00:21:55.568 "trtype": "TCP", 00:21:55.568 "adrfam": "IPv4", 00:21:55.568 "traddr": "10.0.0.1", 00:21:55.568 "trsvcid": "58254" 00:21:55.568 }, 00:21:55.568 "auth": { 00:21:55.568 "state": "completed", 00:21:55.568 "digest": "sha512", 00:21:55.568 "dhgroup": "ffdhe6144" 00:21:55.568 } 00:21:55.568 } 00:21:55.568 ]' 00:21:55.568 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:55.827 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:55.827 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:55.827 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:55.827 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:55.827 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:55.827 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:55.827 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:56.086 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:56.086 05:22:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:56.653 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.653 05:22:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:56.653 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:21:57.221 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:57.221 { 00:21:57.221 "cntlid": 131, 00:21:57.221 "qid": 0, 00:21:57.221 "state": "enabled", 00:21:57.221 "thread": "nvmf_tgt_poll_group_000", 00:21:57.221 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:57.221 "listen_address": { 00:21:57.221 "trtype": "TCP", 00:21:57.221 "adrfam": "IPv4", 00:21:57.221 "traddr": "10.0.0.2", 00:21:57.221 
"trsvcid": "4420" 00:21:57.221 }, 00:21:57.221 "peer_address": { 00:21:57.221 "trtype": "TCP", 00:21:57.221 "adrfam": "IPv4", 00:21:57.221 "traddr": "10.0.0.1", 00:21:57.221 "trsvcid": "58284" 00:21:57.221 }, 00:21:57.221 "auth": { 00:21:57.221 "state": "completed", 00:21:57.221 "digest": "sha512", 00:21:57.221 "dhgroup": "ffdhe6144" 00:21:57.221 } 00:21:57.221 } 00:21:57.221 ]' 00:21:57.221 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:57.480 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:57.480 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:57.480 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:57.480 05:22:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:57.480 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:57.480 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:57.480 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:57.739 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:57.739 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:58.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.306 05:22:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:21:58.875 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:21:58.875 { 00:21:58.875 "cntlid": 133, 00:21:58.875 "qid": 0, 00:21:58.875 "state": "enabled", 00:21:58.875 "thread": "nvmf_tgt_poll_group_000", 00:21:58.875 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:21:58.875 "listen_address": { 00:21:58.875 "trtype": "TCP", 00:21:58.875 "adrfam": "IPv4", 00:21:58.875 "traddr": "10.0.0.2", 00:21:58.875 "trsvcid": "4420" 00:21:58.875 }, 00:21:58.875 "peer_address": { 00:21:58.875 "trtype": "TCP", 00:21:58.875 "adrfam": "IPv4", 00:21:58.875 "traddr": "10.0.0.1", 00:21:58.875 "trsvcid": "58304" 00:21:58.875 }, 00:21:58.875 "auth": { 00:21:58.875 "state": "completed", 00:21:58.875 "digest": "sha512", 00:21:58.875 "dhgroup": "ffdhe6144" 00:21:58.875 } 00:21:58.875 } 00:21:58.875 ]' 00:21:58.875 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:21:59.134 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:21:59.134 05:22:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:21:59.134 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:21:59.134 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:21:59.134 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:21:59.134 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:21:59.134 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:21:59.393 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:59.393 05:22:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:21:59.960 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:21:59.960 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:59.961 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:21:59.961 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:21:59.961 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:00.529 00:22:00.529 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:00.529 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:00.529 05:22:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:00.529 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:00.529 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:00.529 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:00.529 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:00.529 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:00.529 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:00.529 { 00:22:00.529 "cntlid": 135, 00:22:00.529 "qid": 0, 00:22:00.529 "state": "enabled", 00:22:00.529 "thread": "nvmf_tgt_poll_group_000", 00:22:00.529 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:00.529 "listen_address": { 00:22:00.529 "trtype": "TCP", 00:22:00.529 "adrfam": "IPv4", 00:22:00.529 "traddr": "10.0.0.2", 00:22:00.529 "trsvcid": "4420" 00:22:00.529 }, 00:22:00.529 "peer_address": { 00:22:00.529 "trtype": "TCP", 00:22:00.529 "adrfam": "IPv4", 00:22:00.529 "traddr": "10.0.0.1", 00:22:00.529 "trsvcid": "58318" 00:22:00.529 }, 00:22:00.529 "auth": { 00:22:00.529 "state": "completed", 00:22:00.529 "digest": "sha512", 00:22:00.529 "dhgroup": "ffdhe6144" 00:22:00.529 } 00:22:00.529 } 00:22:00.529 ]' 00:22:00.529 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:00.790 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:00.790 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:00.790 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:22:00.790 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:00.790 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:00.790 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:00.790 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:01.050 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:22:01.050 05:22:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:22:01.617 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:01.617 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:01.617 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:01.617 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.617 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.617 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.618 05:22:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:01.618 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:02.186 00:22:02.186 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:02.186 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:02.186 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:02.444 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:02.444 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:02.444 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:02.444 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:02.444 05:22:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:02.444 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:02.444 { 00:22:02.444 "cntlid": 137, 00:22:02.444 "qid": 0, 00:22:02.444 "state": "enabled", 00:22:02.444 "thread": "nvmf_tgt_poll_group_000", 00:22:02.444 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:02.444 "listen_address": { 00:22:02.444 "trtype": "TCP", 00:22:02.444 "adrfam": "IPv4", 00:22:02.444 "traddr": "10.0.0.2", 00:22:02.444 
"trsvcid": "4420" 00:22:02.444 }, 00:22:02.444 "peer_address": { 00:22:02.444 "trtype": "TCP", 00:22:02.444 "adrfam": "IPv4", 00:22:02.444 "traddr": "10.0.0.1", 00:22:02.444 "trsvcid": "58352" 00:22:02.444 }, 00:22:02.444 "auth": { 00:22:02.444 "state": "completed", 00:22:02.444 "digest": "sha512", 00:22:02.444 "dhgroup": "ffdhe8192" 00:22:02.444 } 00:22:02.444 } 00:22:02.444 ]' 00:22:02.444 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:02.444 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:02.444 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:02.445 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:02.445 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:02.445 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:02.445 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:02.445 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:02.703 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:22:02.703 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:03.272 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:03.272 05:22:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:03.531 05:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:03.531 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:04.099 00:22:04.099 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:04.099 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:04.099 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:04.099 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:04.099 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:04.099 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:04.099 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:04.358 { 00:22:04.358 "cntlid": 139, 00:22:04.358 "qid": 0, 00:22:04.358 "state": "enabled", 00:22:04.358 "thread": "nvmf_tgt_poll_group_000", 00:22:04.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:04.358 "listen_address": { 00:22:04.358 "trtype": "TCP", 00:22:04.358 "adrfam": "IPv4", 00:22:04.358 "traddr": "10.0.0.2", 00:22:04.358 "trsvcid": "4420" 00:22:04.358 }, 00:22:04.358 "peer_address": { 00:22:04.358 "trtype": "TCP", 00:22:04.358 "adrfam": "IPv4", 00:22:04.358 "traddr": "10.0.0.1", 00:22:04.358 "trsvcid": "56292" 00:22:04.358 }, 00:22:04.358 "auth": { 00:22:04.358 "state": "completed", 00:22:04.358 "digest": "sha512", 00:22:04.358 "dhgroup": "ffdhe8192" 00:22:04.358 } 00:22:04.358 } 00:22:04.358 ]' 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:04.358 05:22:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:04.358 05:22:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:04.617 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:22:04.617 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: --dhchap-ctrl-secret DHHC-1:02:Y2I2NjFhNjMzMDYyMTI3NjU3ODIzMDVlZjU3MzBhZjU4MmVjMzEwMzEwZmJiZjk2kAG9cA==: 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:05.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.186 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.445 05:22:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:22:05.704 00:22:05.704 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:05.704 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:05.704 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:05.963 05:22:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:05.963 { 00:22:05.963 "cntlid": 141, 00:22:05.963 "qid": 0, 00:22:05.963 "state": "enabled", 00:22:05.963 "thread": "nvmf_tgt_poll_group_000", 00:22:05.963 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:05.963 "listen_address": { 00:22:05.963 "trtype": "TCP", 00:22:05.963 "adrfam": "IPv4", 00:22:05.963 "traddr": "10.0.0.2", 00:22:05.963 "trsvcid": "4420" 00:22:05.963 }, 00:22:05.963 "peer_address": { 00:22:05.963 "trtype": "TCP", 00:22:05.963 "adrfam": "IPv4", 00:22:05.963 "traddr": "10.0.0.1", 00:22:05.963 "trsvcid": "56314" 00:22:05.963 }, 00:22:05.963 "auth": { 00:22:05.963 "state": "completed", 00:22:05.963 "digest": "sha512", 00:22:05.963 "dhgroup": "ffdhe8192" 00:22:05.963 } 00:22:05.963 } 00:22:05.963 ]' 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:05.963 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:06.222 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:06.222 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:06.222 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:06.222 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:06.222 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:06.481 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:22:06.481 05:22:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:01:NjI0NGY5ZGU3NzI2Nzg1ZjIzZTEzM2NkMGRmN2JmNjLR0uN/: 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:07.049 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.049 05:22:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:07.617 00:22:07.617 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:07.617 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:07.617 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:07.876 { 00:22:07.876 "cntlid": 143, 00:22:07.876 "qid": 0, 00:22:07.876 "state": "enabled", 00:22:07.876 "thread": "nvmf_tgt_poll_group_000", 00:22:07.876 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:07.876 "listen_address": { 00:22:07.876 "trtype": "TCP", 00:22:07.876 "adrfam": 
"IPv4", 00:22:07.876 "traddr": "10.0.0.2", 00:22:07.876 "trsvcid": "4420" 00:22:07.876 }, 00:22:07.876 "peer_address": { 00:22:07.876 "trtype": "TCP", 00:22:07.876 "adrfam": "IPv4", 00:22:07.876 "traddr": "10.0.0.1", 00:22:07.876 "trsvcid": "56344" 00:22:07.876 }, 00:22:07.876 "auth": { 00:22:07.876 "state": "completed", 00:22:07.876 "digest": "sha512", 00:22:07.876 "dhgroup": "ffdhe8192" 00:22:07.876 } 00:22:07.876 } 00:22:07.876 ]' 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:07.876 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:08.135 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:22:08.135 05:22:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 
80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:08.703 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.703 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:22:08.962 05:22:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:08.962 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:22:09.530 00:22:09.530 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:09.530 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:09.530 05:22:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:09.530 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:09.530 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:09.530 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:09.530 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:09.530 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:09.530 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:09.530 { 00:22:09.530 "cntlid": 145, 00:22:09.530 "qid": 0, 00:22:09.530 "state": "enabled", 00:22:09.530 "thread": "nvmf_tgt_poll_group_000", 00:22:09.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:09.530 "listen_address": { 00:22:09.530 "trtype": "TCP", 00:22:09.530 "adrfam": "IPv4", 00:22:09.530 "traddr": "10.0.0.2", 00:22:09.530 "trsvcid": "4420" 00:22:09.530 }, 00:22:09.530 "peer_address": { 00:22:09.530 "trtype": "TCP", 00:22:09.530 "adrfam": "IPv4", 00:22:09.530 "traddr": "10.0.0.1", 00:22:09.530 "trsvcid": "56366" 00:22:09.530 }, 00:22:09.530 "auth": { 00:22:09.530 "state": 
"completed", 00:22:09.530 "digest": "sha512", 00:22:09.530 "dhgroup": "ffdhe8192" 00:22:09.530 } 00:22:09.530 } 00:22:09.530 ]' 00:22:09.530 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:09.793 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:09.793 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:09.793 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:09.793 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:09.793 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:09.793 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:09.793 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:10.053 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:22:10.053 05:22:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:YjQ5MTA0OTUzMmRiMzMwYjJhMzJjOTRiNTBlODkxMjcxZTM4YzA5OTJmNzhkMWU5g2dhTA==: --dhchap-ctrl-secret 
DHHC-1:03:OWRhZmMyZjE4MmY1YWQ4NzMyMThjZTk2ZDFkMGFlZWU2MTA2NjIyODU2OTc5MmRjYmE5ZmMxMTZjYzQwOTM4ZoL4Lew=: 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:10.621 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:10.621 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:22:10.880 request: 00:22:10.880 { 00:22:10.880 "name": "nvme0", 00:22:10.880 "trtype": "tcp", 00:22:10.880 "traddr": "10.0.0.2", 00:22:10.880 "adrfam": "ipv4", 00:22:10.880 "trsvcid": "4420", 00:22:10.880 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:10.880 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:10.880 "prchk_reftag": false, 00:22:10.880 "prchk_guard": false, 00:22:10.880 "hdgst": false, 00:22:10.880 "ddgst": false, 00:22:10.880 "dhchap_key": "key2", 00:22:10.880 "allow_unrecognized_csi": false, 00:22:10.880 "method": "bdev_nvme_attach_controller", 00:22:10.880 "req_id": 1 00:22:10.880 } 00:22:10.880 Got JSON-RPC error response 00:22:10.880 response: 00:22:10.880 { 00:22:10.880 "code": -5, 00:22:10.880 "message": 
"Input/output error" 00:22:10.880 } 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.880 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:10.881 05:22:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:10.881 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:22:11.447 request: 00:22:11.447 { 00:22:11.447 "name": "nvme0", 00:22:11.447 "trtype": "tcp", 00:22:11.447 "traddr": "10.0.0.2", 00:22:11.447 "adrfam": "ipv4", 00:22:11.447 "trsvcid": "4420", 00:22:11.447 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:11.447 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:11.447 "prchk_reftag": false, 00:22:11.447 "prchk_guard": false, 00:22:11.447 "hdgst": 
false, 00:22:11.447 "ddgst": false, 00:22:11.447 "dhchap_key": "key1", 00:22:11.447 "dhchap_ctrlr_key": "ckey2", 00:22:11.447 "allow_unrecognized_csi": false, 00:22:11.447 "method": "bdev_nvme_attach_controller", 00:22:11.447 "req_id": 1 00:22:11.447 } 00:22:11.447 Got JSON-RPC error response 00:22:11.447 response: 00:22:11.447 { 00:22:11.447 "code": -5, 00:22:11.447 "message": "Input/output error" 00:22:11.447 } 00:22:11.447 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:11.447 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:11.447 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:11.447 05:22:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:11.447 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:22:12.015 request: 00:22:12.015 { 00:22:12.015 "name": "nvme0", 00:22:12.015 "trtype": 
"tcp", 00:22:12.015 "traddr": "10.0.0.2", 00:22:12.015 "adrfam": "ipv4", 00:22:12.015 "trsvcid": "4420", 00:22:12.015 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:12.015 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:12.015 "prchk_reftag": false, 00:22:12.015 "prchk_guard": false, 00:22:12.015 "hdgst": false, 00:22:12.015 "ddgst": false, 00:22:12.015 "dhchap_key": "key1", 00:22:12.015 "dhchap_ctrlr_key": "ckey1", 00:22:12.015 "allow_unrecognized_csi": false, 00:22:12.015 "method": "bdev_nvme_attach_controller", 00:22:12.015 "req_id": 1 00:22:12.015 } 00:22:12.015 Got JSON-RPC error response 00:22:12.015 response: 00:22:12.015 { 00:22:12.015 "code": -5, 00:22:12.015 "message": "Input/output error" 00:22:12.015 } 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 314561 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 314561 ']' 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314561 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314561 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 314561' 00:22:12.015 killing process with pid 314561 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314561 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314561 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@509 -- # nvmfpid=336679 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@510 -- # waitforlisten 336679 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336679 ']' 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.015 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 336679 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 336679 ']' 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:12.274 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:12.274 05:22:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.544 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:12.544 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:22:12.544 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:22:12.544 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.544 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.544 null0 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.2L0 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Xaj ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Xaj 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.N5P 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.uV4 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.uV4 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.zDE 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.Yw4 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Yw4 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.23o 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:12.805 05:22:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:13.373 nvme0n1 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:22:13.634 { 00:22:13.634 "cntlid": 1, 00:22:13.634 "qid": 0, 00:22:13.634 "state": "enabled", 00:22:13.634 "thread": "nvmf_tgt_poll_group_000", 00:22:13.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:13.634 "listen_address": { 00:22:13.634 "trtype": "TCP", 00:22:13.634 "adrfam": "IPv4", 00:22:13.634 "traddr": "10.0.0.2", 00:22:13.634 "trsvcid": "4420" 00:22:13.634 }, 00:22:13.634 "peer_address": { 00:22:13.634 "trtype": "TCP", 00:22:13.634 "adrfam": "IPv4", 00:22:13.634 "traddr": 
"10.0.0.1", 00:22:13.634 "trsvcid": "56436" 00:22:13.634 }, 00:22:13.634 "auth": { 00:22:13.634 "state": "completed", 00:22:13.634 "digest": "sha512", 00:22:13.634 "dhgroup": "ffdhe8192" 00:22:13.634 } 00:22:13.634 } 00:22:13.634 ]' 00:22:13.634 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:22:13.893 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:22:13.893 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:22:13.893 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:22:13.893 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:22:13.893 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:22:13.893 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:22:13.893 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:22:14.152 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:22:14.153 05:22:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=: 00:22:14.720 05:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:22:14.721 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key3 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:22:14.721 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:14.980 05:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:22:14.980 request: 00:22:14.980 { 00:22:14.980 "name": "nvme0", 00:22:14.980 "trtype": "tcp", 00:22:14.980 "traddr": "10.0.0.2", 00:22:14.980 "adrfam": "ipv4", 00:22:14.980 "trsvcid": "4420", 00:22:14.980 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:22:14.980 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562", 00:22:14.980 "prchk_reftag": false, 00:22:14.980 "prchk_guard": false, 00:22:14.980 "hdgst": false, 00:22:14.980 "ddgst": false, 00:22:14.980 "dhchap_key": "key3", 00:22:14.980 
"allow_unrecognized_csi": false, 00:22:14.980 "method": "bdev_nvme_attach_controller", 00:22:14.980 "req_id": 1 00:22:14.980 } 00:22:14.980 Got JSON-RPC error response 00:22:14.980 response: 00:22:14.980 { 00:22:14.980 "code": -5, 00:22:14.980 "message": "Input/output error" 00:22:14.980 } 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:14.980 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:22:15.239 05:22:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:15.239 05:22:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:22:15.497 request:
00:22:15.497 {
00:22:15.497 "name": "nvme0",
00:22:15.497 "trtype": "tcp",
00:22:15.497 "traddr": "10.0.0.2",
00:22:15.497 "adrfam": "ipv4",
00:22:15.497 "trsvcid": "4420",
00:22:15.497 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:15.498 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:15.498 "prchk_reftag": false,
00:22:15.498 "prchk_guard": false,
00:22:15.498 "hdgst": false,
00:22:15.498 "ddgst": false,
00:22:15.498 "dhchap_key": "key3",
00:22:15.498 "allow_unrecognized_csi": false,
00:22:15.498 "method": "bdev_nvme_attach_controller",
00:22:15.498 "req_id": 1
00:22:15.498 }
00:22:15.498 Got JSON-RPC error response
00:22:15.498 response:
00:22:15.498 {
00:22:15.498 "code": -5,
00:22:15.498 "message": "Input/output error"
00:22:15.498 }
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=,
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:15.498 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:15.757 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:16.016 request:
00:22:16.016 {
00:22:16.016 "name": "nvme0",
00:22:16.016 "trtype": "tcp",
00:22:16.016 "traddr": "10.0.0.2",
00:22:16.016 "adrfam": "ipv4",
00:22:16.016 "trsvcid": "4420",
00:22:16.016 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:16.016 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:16.016 "prchk_reftag": false,
00:22:16.016 "prchk_guard": false,
00:22:16.016 "hdgst": false,
00:22:16.016 "ddgst": false,
00:22:16.016 "dhchap_key": "key0",
00:22:16.016 "dhchap_ctrlr_key": "key1",
00:22:16.016 "allow_unrecognized_csi": false,
00:22:16.016 "method": "bdev_nvme_attach_controller",
00:22:16.016 "req_id": 1
00:22:16.016 }
00:22:16.016 Got JSON-RPC error response
00:22:16.016 response:
00:22:16.016 {
00:22:16.016 "code": -5,
00:22:16.016 "message": "Input/output error"
00:22:16.016 }
00:22:16.016 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:16.016 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:16.016 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:16.016 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:16.016 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:22:16.016 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:16.016 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0
00:22:16.273 nvme0n1
00:22:16.273 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers
00:22:16.273 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name'
00:22:16.273 05:22:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:16.531 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:16.531 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:16.532 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:16.791 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1
00:22:16.791 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:16.791 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:16.791 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:16.791 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:16.791 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:16.791 05:22:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:17.359 nvme0n1
00:22:17.359 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers
00:22:17.359 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name'
00:22:17.359 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:17.618 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:17.618 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:17.618 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:17.618 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:17.618 
05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:17.618 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers
00:22:17.618 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name'
00:22:17.618 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:17.877 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:17.877 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=:
00:22:17.877 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid 80b56b8f-cbc7-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: --dhchap-ctrl-secret DHHC-1:03:ZTc4MWU1NTJmMDYxMzE3NGYyMzUzMzkwNDlmZjlmZmMzMzRmMzY2NDcyMGI0ODBhM2M4YThlNjNlZTQwMDU3OQZX8HA=:
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme*
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == \n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]]
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0
00:22:18.444 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:18.445 05:22:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:18.704 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:22:18.962 request:
00:22:18.962 {
00:22:18.962 "name": "nvme0",
00:22:18.962 "trtype": "tcp",
00:22:18.962 "traddr": "10.0.0.2",
00:22:18.962 "adrfam": "ipv4",
00:22:18.962 "trsvcid": "4420",
00:22:18.962 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:22:18.962 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562",
00:22:18.962 "prchk_reftag": false,
00:22:18.962 "prchk_guard": false,
00:22:18.962 "hdgst": false,
00:22:18.962 "ddgst": false,
00:22:18.962 "dhchap_key": "key1",
00:22:18.963 "allow_unrecognized_csi": false,
00:22:18.963 "method": "bdev_nvme_attach_controller",
00:22:18.963 "req_id": 1
00:22:18.963 }
00:22:18.963 Got JSON-RPC error response
00:22:18.963 response:
00:22:18.963 {
00:22:18.963 "code": -5,
00:22:18.963 "message": "Input/output error"
00:22:18.963 }
00:22:18.963 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:18.963 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:18.963 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:18.963 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:18.963 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:18.963 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:18.963 05:22:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:19.900 nvme0n1
00:22:19.900 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers
00:22:19.900 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name'
00:22:19.900 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:19.900 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:19.900 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:19.900 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:20.158 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:20.158 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:20.158 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x
00:22:20.158 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:20.158 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0
00:22:20.158 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:20.158 05:22:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0
00:22:20.417 nvme0n1
00:22:20.417 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers
00:22:20.417 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:20.417 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name'
00:22:20.676 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:20.676 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0
00:22:20.676 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: '' 2s
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ:
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ: ]]
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:Y2ZmYTZiYzkzOTQzMDMxOWVlZmU3MzU5Zjg3NmZkYWPFX3PJ:
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]]
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:20.935 05:22:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:22.838 
05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: 2s
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==:
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]]
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==: ]]
00:22:22.838 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MTA5ZDUwZDc0OGU3YmIwNTRkNjc3ZDgwYWI4NGUxNzc2YzFhYTJlZmY2NGE0Mzc1UVLmaw==:
00:22:22.839 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]]
00:22:22.839 05:22:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:22:25.373 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:25.373 05:22:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1
00:22:25.941 nvme0n1
00:22:25.941 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:25.941 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:25.941 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:25.941 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:25.941 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:25.941 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:26.200 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers
00:22:26.200 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:26.200 05:22:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name'
00:22:26.458 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:26.458 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562
00:22:26.458 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:26.458 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:26.458 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:26.458 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0
00:22:26.458 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0
00:22:26.718 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers
00:22:26.718 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name'
00:22:26.718 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc
00:22:26.977 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:26.978 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc
00:22:26.978 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:22:26.978 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:26.978 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3
00:22:27.546 request:
00:22:27.546 {
00:22:27.546 "name": "nvme0",
00:22:27.546 "dhchap_key": "key1",
00:22:27.546 "dhchap_ctrlr_key": "key3",
00:22:27.546 "method": "bdev_nvme_set_keys",
00:22:27.546 "req_id": 1
00:22:27.546 }
00:22:27.546 Got JSON-RPC error response
00:22:27.546 response:
00:22:27.546 {
00:22:27.546 "code": -13,
00:22:27.546 "message": "Permission denied"
00:22:27.546 }
00:22:27.546 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:22:27.546 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:22:27.546 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:22:27.546 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:22:27.546 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers
00:22:27.546 05:22:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length
00:22:27.546 05:22:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:27.546 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:22:27.546 05:22:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:22:28.483 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:22:28.483 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:22:28.483 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:28.742 05:22:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:22:29.680 nvme0n1 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.680 05:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.680 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:22:29.939 request: 00:22:29.939 { 00:22:29.939 "name": "nvme0", 00:22:29.939 "dhchap_key": "key2", 00:22:29.939 "dhchap_ctrlr_key": "key0", 00:22:29.939 "method": "bdev_nvme_set_keys", 00:22:29.939 "req_id": 1 00:22:29.939 } 00:22:29.939 Got JSON-RPC error response 00:22:29.939 response: 00:22:29.939 { 00:22:29.939 "code": -13, 00:22:29.939 "message": "Permission denied" 00:22:29.939 } 00:22:29.939 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:22:29.939 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:22:29.939 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:22:29.939 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:22:29.939 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:29.939 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:29.939 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:30.198 05:22:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:22:30.198 05:22:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:22:31.135 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:22:31.135 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:22:31.135 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 314585 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 314585 ']' 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 314585 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.394 05:22:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 314585 00:22:31.394 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:31.394 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:31.394 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 314585' 00:22:31.394 killing process with pid 314585 00:22:31.394 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 314585 00:22:31.394 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 314585 00:22:31.654 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:22:31.654 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:31.654 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # sync 00:22:31.654 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:31.654 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set +e 00:22:31.654 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:31.654 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:31.654 rmmod nvme_tcp 00:22:31.914 rmmod nvme_fabrics 00:22:31.914 rmmod nvme_keyring 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@128 -- # set -e 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # return 0 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@517 -- # '[' -n 336679 ']' 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@518 -- # killprocess 336679 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 336679 ']' 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 336679 00:22:31.915 05:22:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 336679 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 336679' 00:22:31.915 killing process with pid 336679 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 336679 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 336679 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # iptr 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-save 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:31.915 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@791 -- # iptables-restore 00:22:32.174 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:32.174 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:32.174 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:32.174 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:32.174 05:22:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.2L0 /tmp/spdk.key-sha256.N5P /tmp/spdk.key-sha384.zDE /tmp/spdk.key-sha512.23o /tmp/spdk.key-sha512.Xaj /tmp/spdk.key-sha384.uV4 /tmp/spdk.key-sha256.Yw4 '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:22:34.078 00:22:34.078 real 2m34.176s 00:22:34.078 user 5m54.749s 00:22:34.078 sys 0m23.946s 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:22:34.078 ************************************ 00:22:34.078 END TEST nvmf_auth_target 00:22:34.078 ************************************ 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:34.078 05:22:47 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:34.078 ************************************ 00:22:34.078 START TEST nvmf_bdevio_no_huge 00:22:34.078 ************************************ 00:22:34.078 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:22:34.339 * Looking for test storage... 00:22:34.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lcov --version 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:22:34.339 05:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( 
ver1[v] > ver2[v] )) 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:34.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.339 --rc genhtml_branch_coverage=1 00:22:34.339 --rc genhtml_function_coverage=1 00:22:34.339 --rc genhtml_legend=1 00:22:34.339 --rc geninfo_all_blocks=1 00:22:34.339 --rc geninfo_unexecuted_blocks=1 00:22:34.339 00:22:34.339 ' 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:34.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.339 --rc genhtml_branch_coverage=1 00:22:34.339 --rc genhtml_function_coverage=1 00:22:34.339 --rc genhtml_legend=1 00:22:34.339 --rc geninfo_all_blocks=1 00:22:34.339 --rc geninfo_unexecuted_blocks=1 00:22:34.339 00:22:34.339 ' 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:34.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.339 --rc genhtml_branch_coverage=1 00:22:34.339 --rc genhtml_function_coverage=1 00:22:34.339 --rc genhtml_legend=1 00:22:34.339 --rc geninfo_all_blocks=1 00:22:34.339 --rc geninfo_unexecuted_blocks=1 00:22:34.339 00:22:34.339 ' 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:34.339 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:34.339 --rc genhtml_branch_coverage=1 00:22:34.339 --rc 
genhtml_function_coverage=1 00:22:34.339 --rc genhtml_legend=1 00:22:34.339 --rc geninfo_all_blocks=1 00:22:34.339 --rc geninfo_unexecuted_blocks=1 00:22:34.339 00:22:34.339 ' 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:34.339 05:22:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.339 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # : 0 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:34.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # xtrace_disable 00:22:34.340 05:22:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # pci_devs=() 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@315 -- # local -a pci_devs 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # net_devs=() 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # e810=() 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # local -ga e810 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # x722=() 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # local -ga x722 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # mlx=() 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@322 -- # local -ga mlx 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 
0x159b)' 00:22:40.908 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:40.908 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:40.909 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- 
# for pci in "${pci_devs[@]}" 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:40.909 Found net devices under 0000:af:00.0: cvl_0_0 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:40.909 
05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:40.909 Found net devices under 0000:af:00.1: cvl_0_1 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@442 -- # is_hw=yes 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:22:40.909 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:40.909 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:22:40.909 00:22:40.909 --- 10.0.0.2 ping statistics --- 00:22:40.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.909 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:40.909 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:40.909 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:22:40.909 00:22:40.909 --- 10.0.0.1 ping statistics --- 00:22:40.909 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:40.909 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # return 0 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart 
-m 0x78 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@509 -- # nvmfpid=343406 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@510 -- # waitforlisten 343406 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 343406 ']' 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:40.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:40.909 05:22:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.909 [2024-12-15 05:22:53.820170] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:22:40.909 [2024-12-15 05:22:53.820219] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:22:40.909 [2024-12-15 05:22:53.901814] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:40.909 [2024-12-15 05:22:53.937572] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:40.909 [2024-12-15 05:22:53.937601] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:40.909 [2024-12-15 05:22:53.937608] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:40.909 [2024-12-15 05:22:53.937614] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:40.909 [2024-12-15 05:22:53.937618] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:40.909 [2024-12-15 05:22:53.938712] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:22:40.909 [2024-12-15 05:22:53.938819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:22:40.909 [2024-12-15 05:22:53.938927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:22:40.909 [2024-12-15 05:22:53.938927] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.910 [2024-12-15 05:22:54.087872] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:40.910 05:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.910 Malloc0 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:40.910 [2024-12-15 05:22:54.132188] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.910 05:22:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # config=() 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@560 -- # local subsystem config 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:22:40.910 { 00:22:40.910 "params": { 00:22:40.910 "name": "Nvme$subsystem", 00:22:40.910 "trtype": "$TEST_TRANSPORT", 00:22:40.910 "traddr": "$NVMF_FIRST_TARGET_IP", 00:22:40.910 "adrfam": "ipv4", 00:22:40.910 "trsvcid": "$NVMF_PORT", 00:22:40.910 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:22:40.910 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:22:40.910 "hdgst": ${hdgst:-false}, 00:22:40.910 "ddgst": ${ddgst:-false} 00:22:40.910 }, 00:22:40.910 "method": "bdev_nvme_attach_controller" 00:22:40.910 } 00:22:40.910 EOF 00:22:40.910 )") 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@582 -- # cat 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@584 -- # jq . 
00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@585 -- # IFS=, 00:22:40.910 05:22:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:22:40.910 "params": { 00:22:40.910 "name": "Nvme1", 00:22:40.910 "trtype": "tcp", 00:22:40.910 "traddr": "10.0.0.2", 00:22:40.910 "adrfam": "ipv4", 00:22:40.910 "trsvcid": "4420", 00:22:40.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:22:40.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:22:40.910 "hdgst": false, 00:22:40.910 "ddgst": false 00:22:40.910 }, 00:22:40.910 "method": "bdev_nvme_attach_controller" 00:22:40.910 }' 00:22:40.910 [2024-12-15 05:22:54.181258] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:40.910 [2024-12-15 05:22:54.181304] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid343519 ] 00:22:40.910 [2024-12-15 05:22:54.256553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:40.910 [2024-12-15 05:22:54.293861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:40.910 [2024-12-15 05:22:54.293969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:22:40.910 [2024-12-15 05:22:54.293969] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:22:40.910 I/O targets: 00:22:40.910 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:22:40.910 00:22:40.910 00:22:40.910 CUnit - A unit testing framework for C - Version 2.1-3 00:22:40.910 http://cunit.sourceforge.net/ 00:22:40.910 00:22:40.910 00:22:40.910 Suite: bdevio tests on: Nvme1n1 00:22:40.910 Test: blockdev write read block ...passed 00:22:40.910 Test: blockdev write zeroes read block ...passed 00:22:40.910 Test: blockdev write zeroes read no split ...passed 00:22:40.910 Test: blockdev write zeroes 
read split ...passed 00:22:40.910 Test: blockdev write zeroes read split partial ...passed 00:22:40.910 Test: blockdev reset ...[2024-12-15 05:22:54.570971] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:40.910 [2024-12-15 05:22:54.571039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2060ea0 (9): Bad file descriptor 00:22:40.910 [2024-12-15 05:22:54.589895] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:22:40.910 passed 00:22:40.910 Test: blockdev write read 8 blocks ...passed 00:22:40.910 Test: blockdev write read size > 128k ...passed 00:22:40.910 Test: blockdev write read invalid size ...passed 00:22:41.169 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:22:41.170 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:22:41.170 Test: blockdev write read max offset ...passed 00:22:41.170 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:22:41.170 Test: blockdev writev readv 8 blocks ...passed 00:22:41.170 Test: blockdev writev readv 30 x 1block ...passed 00:22:41.170 Test: blockdev writev readv block ...passed 00:22:41.170 Test: blockdev writev readv size > 128k ...passed 00:22:41.170 Test: blockdev writev readv size > 128k in two iovs ...passed 00:22:41.170 Test: blockdev comparev and writev ...[2024-12-15 05:22:54.760667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 05:22:54.760701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.760716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 
05:22:54.760723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.760960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 05:22:54.760971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.760982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 05:22:54.760990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.761226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 05:22:54.761237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.761248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 05:22:54.761255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.761488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 05:22:54.761499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.761511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:22:41.170 [2024-12-15 05:22:54.761518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:41.170 passed 00:22:41.170 Test: blockdev nvme passthru rw ...passed 00:22:41.170 Test: blockdev nvme passthru vendor specific ...[2024-12-15 05:22:54.843376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.170 [2024-12-15 05:22:54.843392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.843492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.170 [2024-12-15 05:22:54.843502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.843599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.170 [2024-12-15 05:22:54.843609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:41.170 [2024-12-15 05:22:54.843708] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:41.170 [2024-12-15 05:22:54.843718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:41.170 passed 00:22:41.429 Test: blockdev nvme admin passthru ...passed 00:22:41.429 Test: blockdev copy ...passed 00:22:41.429 00:22:41.429 Run Summary: Type Total Ran Passed Failed Inactive 00:22:41.429 suites 1 1 n/a 0 0 00:22:41.429 tests 23 23 23 0 0 00:22:41.429 asserts 152 152 152 0 n/a 00:22:41.429 00:22:41.429 Elapsed time = 0.910 seconds 
00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@516 -- # nvmfcleanup 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # sync 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set +e 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # for i in {1..20} 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:22:41.687 rmmod nvme_tcp 00:22:41.687 rmmod nvme_fabrics 00:22:41.687 rmmod nvme_keyring 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@128 -- # set -e 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # return 0 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@517 -- # '[' -n 343406 ']' 00:22:41.687 05:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@518 -- # killprocess 343406 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 343406 ']' 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 343406 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 343406 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 343406' 00:22:41.687 killing process with pid 343406 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 343406 00:22:41.687 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 343406 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # iptr 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-save 00:22:41.946 05:22:55 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@791 -- # iptables-restore 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # remove_spdk_ns 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:41.946 05:22:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:22:44.483 00:22:44.483 real 0m9.879s 00:22:44.483 user 0m9.585s 00:22:44.483 sys 0m5.268s 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:22:44.483 ************************************ 00:22:44.483 END TEST nvmf_bdevio_no_huge 00:22:44.483 ************************************ 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:44.483 
************************************ 00:22:44.483 START TEST nvmf_tls 00:22:44.483 ************************************ 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:22:44.483 * Looking for test storage... 00:22:44.483 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lcov --version 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:44.483 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # read -ra ver2 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- 
# local lt=0 gt=0 eq=0 v 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:22:44.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.484 --rc genhtml_branch_coverage=1 00:22:44.484 --rc genhtml_function_coverage=1 00:22:44.484 --rc genhtml_legend=1 00:22:44.484 --rc geninfo_all_blocks=1 00:22:44.484 --rc geninfo_unexecuted_blocks=1 00:22:44.484 00:22:44.484 ' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:22:44.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.484 --rc genhtml_branch_coverage=1 00:22:44.484 --rc genhtml_function_coverage=1 00:22:44.484 --rc genhtml_legend=1 00:22:44.484 --rc geninfo_all_blocks=1 00:22:44.484 --rc geninfo_unexecuted_blocks=1 00:22:44.484 00:22:44.484 ' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:22:44.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.484 --rc genhtml_branch_coverage=1 00:22:44.484 --rc genhtml_function_coverage=1 00:22:44.484 --rc genhtml_legend=1 00:22:44.484 --rc geninfo_all_blocks=1 00:22:44.484 --rc geninfo_unexecuted_blocks=1 00:22:44.484 00:22:44.484 ' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:22:44.484 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:44.484 --rc genhtml_branch_coverage=1 00:22:44.484 --rc genhtml_function_coverage=1 00:22:44.484 --rc genhtml_legend=1 00:22:44.484 --rc geninfo_all_blocks=1 00:22:44.484 --rc geninfo_unexecuted_blocks=1 00:22:44.484 00:22:44.484 ' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:44.484 
05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 
00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # : 0 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 
-- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:22:44.484 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@55 -- # have_pci_nics=0 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@476 -- # prepare_net_devs 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@438 -- # local -g is_hw=no 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@440 -- # remove_spdk_ns 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@309 -- # xtrace_disable 00:22:44.484 05:22:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # pci_devs=() 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # local -a pci_devs 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # pci_net_devs=() 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # pci_drivers=() 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # local -A pci_drivers 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # net_devs=() 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@319 -- # local -ga net_devs 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # e810=() 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # local -ga e810 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # x722=() 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # local -ga x722 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # mlx=() 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@322 -- # local -ga mlx 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.054 05:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:22:51.054 Found 0000:af:00.0 (0x8086 - 0x159b) 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.054 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:22:51.055 Found 0000:af:00.1 (0x8086 - 0x159b) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.055 05:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:22:51.055 Found net devices under 0000:af:00.0: cvl_0_0 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # [[ up == up ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:22:51.055 Found net devices under 0000:af:00.1: cvl_0_1 00:22:51.055 05:23:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@442 -- # is_hw=yes 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:22:51.055 
05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:22:51.055 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:51.055 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:22:51.055 00:22:51.055 --- 10.0.0.2 ping statistics --- 00:22:51.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.055 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:51.055 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:51.055 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:22:51.055 00:22:51.055 --- 10.0.0.1 ping statistics --- 00:22:51.055 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:51.055 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # return 0 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=347128 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 347128 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 347128 ']' 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:51.055 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:51.055 05:23:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.055 [2024-12-15 05:23:03.860259] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:22:51.055 [2024-12-15 05:23:03.860309] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:51.055 [2024-12-15 05:23:03.939390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:51.055 [2024-12-15 05:23:03.961431] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:51.055 [2024-12-15 05:23:03.961468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:22:51.055 [2024-12-15 05:23:03.961476] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:51.055 [2024-12-15 05:23:03.961484] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:51.055 [2024-12-15 05:23:03.961489] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:51.055 [2024-12-15 05:23:03.961971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:22:51.055 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:51.055 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:22:51.055 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:22:51.055 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:51.055 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:22:51.055 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # '[' tcp '!=' tcp ']' 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:22:51.056 true 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # jq -r .tls_version 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # version=0 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@75 -- # [[ 0 != \0 ]] 00:22:51.056 
05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.056 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # jq -r .tls_version 00:22:51.315 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # version=13 00:22:51.315 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@83 -- # [[ 13 != \1\3 ]] 00:22:51.315 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:22:51.315 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.315 05:23:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # jq -r .tls_version 00:22:51.577 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # version=7 00:22:51.577 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@91 -- # [[ 7 != \7 ]] 00:22:51.577 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.577 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # jq -r .enable_ktls 00:22:51.837 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # ktls=false 00:22:51.837 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@98 -- # [[ false != \f\a\l\s\e ]] 00:22:51.837 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 
00:22:51.837 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:51.837 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # jq -r .enable_ktls 00:22:52.096 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # ktls=true 00:22:52.096 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@106 -- # [[ true != \t\r\u\e ]] 00:22:52.096 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:22:52.355 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:22:52.355 05:23:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # jq -r .enable_ktls 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # ktls=false 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # [[ false != \f\a\l\s\e ]] 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:52.614 05:23:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=ffeeddccbbaa99887766554433221100 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=1 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_path=/tmp/tmp.Sn95Hk8dQE 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # mktemp 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # key_2_path=/tmp/tmp.Abm8grStfn 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.Sn95Hk8dQE 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@129 -- # chmod 0600 /tmp/tmp.Abm8grStfn 00:22:52.614 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:22:52.872 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@132 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:22:53.131 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@134 -- # setup_nvmf_tgt /tmp/tmp.Sn95Hk8dQE 00:22:53.131 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Sn95Hk8dQE 00:22:53.131 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:53.131 [2024-12-15 05:23:06.750258] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:53.131 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:22:53.390 05:23:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:22:53.649 [2024-12-15 05:23:07.127233] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:53.649 [2024-12-15 05:23:07.127439] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:53.649 05:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:22:53.649 malloc0 00:22:53.649 05:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:22:53.908 05:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Sn95Hk8dQE 00:22:54.167 05:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:22:54.426 05:23:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@138 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Sn95Hk8dQE 00:23:04.407 Initializing NVMe Controllers 00:23:04.407 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:04.407 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:04.407 Initialization complete. Launching workers. 
00:23:04.407 ======================================================== 00:23:04.407 Latency(us) 00:23:04.407 Device Information : IOPS MiB/s Average min max 00:23:04.407 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16998.98 66.40 3764.98 800.90 6088.89 00:23:04.407 ======================================================== 00:23:04.407 Total : 16998.98 66.40 3764.98 800.90 6088.89 00:23:04.407 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@144 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sn95Hk8dQE 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Sn95Hk8dQE 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=349587 00:23:04.407 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 349587 /var/tmp/bdevperf.sock 00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 349587 ']' 00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:04.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:04.408 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:04.408 [2024-12-15 05:23:18.069814] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:04.408 [2024-12-15 05:23:18.069861] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid349587 ] 00:23:04.667 [2024-12-15 05:23:18.144197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:04.667 [2024-12-15 05:23:18.166967] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:04.667 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:04.667 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:04.667 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sn95Hk8dQE 00:23:04.926 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:23:04.926 [2024-12-15 05:23:18.606939] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:05.185 TLSTESTn1 00:23:05.185 05:23:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:05.185 Running I/O for 10 seconds... 00:23:07.500 4994.00 IOPS, 19.51 MiB/s [2024-12-15T04:23:22.124Z] 5207.50 IOPS, 20.34 MiB/s [2024-12-15T04:23:23.061Z] 5347.67 IOPS, 20.89 MiB/s [2024-12-15T04:23:23.997Z] 5361.25 IOPS, 20.94 MiB/s [2024-12-15T04:23:24.933Z] 5426.40 IOPS, 21.20 MiB/s [2024-12-15T04:23:25.868Z] 5363.17 IOPS, 20.95 MiB/s [2024-12-15T04:23:26.805Z] 5239.86 IOPS, 20.47 MiB/s [2024-12-15T04:23:28.181Z] 5200.50 IOPS, 20.31 MiB/s [2024-12-15T04:23:29.116Z] 5252.44 IOPS, 20.52 MiB/s [2024-12-15T04:23:29.116Z] 5285.70 IOPS, 20.65 MiB/s 00:23:15.429 Latency(us) 00:23:15.429 [2024-12-15T04:23:29.116Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.429 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:15.429 Verification LBA range: start 0x0 length 0x2000 00:23:15.429 TLSTESTn1 : 10.01 5291.39 20.67 0.00 0.00 24155.44 5679.79 32206.26 00:23:15.429 [2024-12-15T04:23:29.116Z] =================================================================================================================== 00:23:15.430 [2024-12-15T04:23:29.117Z] Total : 5291.39 20.67 0.00 0.00 24155.44 5679.79 32206.26 00:23:15.430 { 00:23:15.430 "results": [ 00:23:15.430 { 00:23:15.430 "job": "TLSTESTn1", 00:23:15.430 "core_mask": "0x4", 00:23:15.430 "workload": "verify", 00:23:15.430 "status": "finished", 00:23:15.430 "verify_range": { 00:23:15.430 "start": 0, 00:23:15.430 "length": 8192 00:23:15.430 }, 00:23:15.430 "queue_depth": 128, 00:23:15.430 "io_size": 4096, 00:23:15.430 "runtime": 10.01325, 00:23:15.430 "iops": 
5291.388909694655, 00:23:15.430 "mibps": 20.669487928494746, 00:23:15.430 "io_failed": 0, 00:23:15.430 "io_timeout": 0, 00:23:15.430 "avg_latency_us": 24155.43604437638, 00:23:15.430 "min_latency_us": 5679.786666666667, 00:23:15.430 "max_latency_us": 32206.262857142858 00:23:15.430 } 00:23:15.430 ], 00:23:15.430 "core_count": 1 00:23:15.430 } 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 349587 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 349587 ']' 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 349587 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 349587 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 349587' 00:23:15.430 killing process with pid 349587 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 349587 00:23:15.430 Received shutdown signal, test time was about 10.000000 seconds 00:23:15.430 00:23:15.430 Latency(us) 00:23:15.430 [2024-12-15T04:23:29.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:15.430 [2024-12-15T04:23:29.117Z] 
=================================================================================================================== 00:23:15.430 [2024-12-15T04:23:29.117Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:15.430 05:23:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 349587 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@147 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Abm8grStfn 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Abm8grStfn 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Abm8grStfn 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Abm8grStfn 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351250 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351250 /var/tmp/bdevperf.sock 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351250 ']' 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:15.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:15.430 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:15.430 [2024-12-15 05:23:29.102063] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:15.430 [2024-12-15 05:23:29.102107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351250 ] 00:23:15.688 [2024-12-15 05:23:29.175518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:15.688 [2024-12-15 05:23:29.198103] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.688 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.688 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:15.688 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Abm8grStfn 00:23:15.947 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:16.206 [2024-12-15 05:23:29.653095] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:16.206 [2024-12-15 05:23:29.657734] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:23:16.206 [2024-12-15 05:23:29.658370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac1340 (107): Transport endpoint is not connected 00:23:16.206 [2024-12-15 05:23:29.659362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac1340 (9): Bad file descriptor 00:23:16.206 
[2024-12-15 05:23:29.660363] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:16.206 [2024-12-15 05:23:29.660382] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:16.206 [2024-12-15 05:23:29.660390] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:16.206 [2024-12-15 05:23:29.660398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:16.206 request: 00:23:16.206 { 00:23:16.206 "name": "TLSTEST", 00:23:16.206 "trtype": "tcp", 00:23:16.206 "traddr": "10.0.0.2", 00:23:16.206 "adrfam": "ipv4", 00:23:16.206 "trsvcid": "4420", 00:23:16.206 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:16.206 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:16.206 "prchk_reftag": false, 00:23:16.206 "prchk_guard": false, 00:23:16.206 "hdgst": false, 00:23:16.206 "ddgst": false, 00:23:16.206 "psk": "key0", 00:23:16.206 "allow_unrecognized_csi": false, 00:23:16.206 "method": "bdev_nvme_attach_controller", 00:23:16.206 "req_id": 1 00:23:16.206 } 00:23:16.206 Got JSON-RPC error response 00:23:16.207 response: 00:23:16.207 { 00:23:16.207 "code": -5, 00:23:16.207 "message": "Input/output error" 00:23:16.207 } 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351250 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351250 ']' 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351250 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351250 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351250' 00:23:16.207 killing process with pid 351250 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351250 00:23:16.207 Received shutdown signal, test time was about 10.000000 seconds 00:23:16.207 00:23:16.207 Latency(us) 00:23:16.207 [2024-12-15T04:23:29.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:16.207 [2024-12-15T04:23:29.894Z] =================================================================================================================== 00:23:16.207 [2024-12-15T04:23:29.894Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351250 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@150 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Sn95Hk8dQE 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Sn95Hk8dQE 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Sn95Hk8dQE 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Sn95Hk8dQE 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351424 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351424 
/var/tmp/bdevperf.sock 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351424 ']' 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:16.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:16.207 05:23:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:16.466 [2024-12-15 05:23:29.919499] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:16.466 [2024-12-15 05:23:29.919546] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351424 ] 00:23:16.466 [2024-12-15 05:23:29.981553] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:16.466 [2024-12-15 05:23:30.000843] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:16.466 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:16.466 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:16.466 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sn95Hk8dQE 00:23:16.826 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:23:17.120 [2024-12-15 05:23:30.478072] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.120 [2024-12-15 05:23:30.489474] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:17.120 [2024-12-15 05:23:30.489497] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:23:17.120 [2024-12-15 05:23:30.489520] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:17.120 [2024-12-15 05:23:30.490234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1726340 (107): Transport endpoint is not connected 00:23:17.120 [2024-12-15 05:23:30.491227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1726340 (9): Bad file descriptor 00:23:17.120 [2024-12-15 05:23:30.492229] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:23:17.120 [2024-12-15 05:23:30.492247] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:17.120 [2024-12-15 05:23:30.492254] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:23:17.120 [2024-12-15 05:23:30.492262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:23:17.120 request: 00:23:17.120 { 00:23:17.120 "name": "TLSTEST", 00:23:17.120 "trtype": "tcp", 00:23:17.120 "traddr": "10.0.0.2", 00:23:17.120 "adrfam": "ipv4", 00:23:17.120 "trsvcid": "4420", 00:23:17.120 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:17.120 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:23:17.120 "prchk_reftag": false, 00:23:17.121 "prchk_guard": false, 00:23:17.121 "hdgst": false, 00:23:17.121 "ddgst": false, 00:23:17.121 "psk": "key0", 00:23:17.121 "allow_unrecognized_csi": false, 00:23:17.121 "method": "bdev_nvme_attach_controller", 00:23:17.121 "req_id": 1 00:23:17.121 } 00:23:17.121 Got JSON-RPC error response 00:23:17.121 response: 00:23:17.121 { 00:23:17.121 "code": -5, 00:23:17.121 "message": "Input/output error" 00:23:17.121 } 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351424 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351424 ']' 00:23:17.121 05:23:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351424 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351424 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351424' 00:23:17.121 killing process with pid 351424 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351424 00:23:17.121 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.121 00:23:17.121 Latency(us) 00:23:17.121 [2024-12-15T04:23:30.808Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.121 [2024-12-15T04:23:30.808Z] =================================================================================================================== 00:23:17.121 [2024-12-15T04:23:30.808Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351424 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.121 05:23:30 
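The failed attach above is driven through SPDK's JSON-RPC interface (`rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller ...`), and the trace echoes the rejected request verbatim. As a rough illustration only (the JSON-RPC 2.0 framing and the `build_attach_request` helper are assumptions, not taken from this log), the payload can be reconstructed like this:

```python
import json

# Hypothetical reconstruction of the JSON-RPC request behind the
# "request:" block echoed in the trace above; field values mirror
# that dump, the framing is an assumption.
def build_attach_request(req_id, subnqn, hostnqn, psk_name):
    return {
        "jsonrpc": "2.0",
        "id": req_id,
        "method": "bdev_nvme_attach_controller",
        "params": {
            "name": "TLSTEST",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": subnqn,
            "hostnqn": hostnqn,
            "prchk_reftag": False,
            "prchk_guard": False,
            "hdgst": False,
            "ddgst": False,
            "psk": psk_name,
            "allow_unrecognized_csi": False,
        },
    }

req = build_attach_request(1, "nqn.2016-06.io.spdk:cnode1",
                           "nqn.2016-06.io.spdk:host2", "key0")
wire = json.dumps(req)  # rpc.py writes this to the Unix socket
```

The `-5` / `Input/output error` response in the dump is the generic code the RPC layer returns when the underlying TCP connection dies during the TLS handshake, as it does here.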
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@153 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sn95Hk8dQE 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sn95Hk8dQE 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Sn95Hk8dQE 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Sn95Hk8dQE 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351650 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351650 /var/tmp/bdevperf.sock 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351650 ']' 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.121 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.121 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.121 [2024-12-15 05:23:30.773688] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:17.121 [2024-12-15 05:23:30.773733] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351650 ] 00:23:17.401 [2024-12-15 05:23:30.847299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:17.401 [2024-12-15 05:23:30.870022] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:17.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:17.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:17.401 05:23:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Sn95Hk8dQE 00:23:17.684 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:17.684 [2024-12-15 05:23:31.301092] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:17.684 [2024-12-15 05:23:31.311552] tcp.c: 987:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:17.684 [2024-12-15 05:23:31.311573] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:23:17.684 [2024-12-15 05:23:31.311595] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:23:17.684 [2024-12-15 05:23:31.312280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a1340 (107): Transport endpoint is not connected 00:23:17.684 [2024-12-15 05:23:31.313274] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15a1340 (9): Bad file descriptor 00:23:17.684 [2024-12-15 05:23:31.314276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:23:17.684 [2024-12-15 05:23:31.314287] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:23:17.684 [2024-12-15 05:23:31.314294] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:23:17.684 [2024-12-15 05:23:31.314303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:23:17.684 request: 00:23:17.684 { 00:23:17.684 "name": "TLSTEST", 00:23:17.684 "trtype": "tcp", 00:23:17.684 "traddr": "10.0.0.2", 00:23:17.684 "adrfam": "ipv4", 00:23:17.684 "trsvcid": "4420", 00:23:17.684 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:23:17.684 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:17.684 "prchk_reftag": false, 00:23:17.684 "prchk_guard": false, 00:23:17.684 "hdgst": false, 00:23:17.684 "ddgst": false, 00:23:17.684 "psk": "key0", 00:23:17.684 "allow_unrecognized_csi": false, 00:23:17.684 "method": "bdev_nvme_attach_controller", 00:23:17.684 "req_id": 1 00:23:17.684 } 00:23:17.684 Got JSON-RPC error response 00:23:17.684 response: 00:23:17.684 { 00:23:17.684 "code": -5, 00:23:17.684 "message": "Input/output error" 00:23:17.684 } 00:23:17.684 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351650 00:23:17.684 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351650 ']' 00:23:17.684 05:23:31 
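The repeated `Could not find PSK for identity: NVMe0R01 ...` errors show the TLS PSK identity the target computed for the handshake and failed to match. Reading the identity strings in this trace, the identity combines a hash indicator, `R` for a retained PSK, a version field, and the two NQNs; a hypothetical helper reproducing the printed form (the field interpretation is inferred, not quoted from SPDK source):

```python
def nvme_tls_psk_identity(hostnqn: str, subnqn: str,
                          hash_indicator: str = "0") -> str:
    # "NVMe" + hash indicator + "R" (retained PSK) + "01" (version),
    # then host NQN and subsystem NQN, space-separated -- inferred
    # from the identity strings printed in the errors above.
    return f"NVMe{hash_indicator}R01 {hostnqn} {subnqn}"

identity = nvme_tls_psk_identity("nqn.2016-06.io.spdk:host1",
                                 "nqn.2016-06.io.spdk:cnode2")
```

Because the key registered under `key0` does not correspond to this identity on the target side, the server-side PSK lookup fails and the socket is torn down, producing the `errno 107` / `Bad file descriptor` cascade that follows.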
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351650 00:23:17.684 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:17.684 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:17.684 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351650 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351650' 00:23:17.972 killing process with pid 351650 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351650 00:23:17.972 Received shutdown signal, test time was about 10.000000 seconds 00:23:17.972 00:23:17.972 Latency(us) 00:23:17.972 [2024-12-15T04:23:31.659Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:17.972 [2024-12-15T04:23:31.659Z] =================================================================================================================== 00:23:17.972 [2024-12-15T04:23:31.659Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351650 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:17.972 05:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=351676 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:17.972 05:23:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 351676 /var/tmp/bdevperf.sock 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351676 ']' 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:17.972 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:17.972 [2024-12-15 05:23:31.580859] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:17.972 [2024-12-15 05:23:31.580904] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid351676 ] 00:23:17.972 [2024-12-15 05:23:31.640422] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:18.264 [2024-12-15 05:23:31.661945] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:18.264 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:18.264 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:18.264 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:23:18.264 [2024-12-15 05:23:31.920364] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:23:18.264 [2024-12-15 05:23:31.920391] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:18.264 request: 00:23:18.264 { 00:23:18.264 "name": "key0", 00:23:18.264 "path": "", 00:23:18.264 "method": "keyring_file_add_key", 00:23:18.264 "req_id": 1 00:23:18.264 } 00:23:18.264 Got JSON-RPC error response 00:23:18.264 response: 00:23:18.264 { 00:23:18.264 "code": -1, 00:23:18.264 "message": "Operation not permitted" 00:23:18.264 } 00:23:18.546 05:23:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:18.546 [2024-12-15 05:23:32.112947] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
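The `keyring_file_add_key key0 ''` call above fails before any TLS work happens: file-based keys must be registered with an absolute path, and the empty string fails that check, which is why the RPC returns `-1` / `Operation not permitted`. A minimal approximation of the validation (the `check_key_path` helper name is illustrative, not SPDK's):

```python
import os

def check_key_path(path: str) -> str:
    # Mirrors the "Non-absolute paths are not allowed" rejection above:
    # an empty or relative path never reaches the keyring.
    if not os.path.isabs(path):
        raise ValueError(f"Non-absolute paths are not allowed: {path!r}")
    return path
```

With no `key0` in the keyring, the subsequent `bdev_nvme_attach_controller --psk key0` then fails differently from the earlier cases: `-126` / `Required key not available` instead of an I/O error, since the failure is detected before a connection is even attempted.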
00:23:18.546 [2024-12-15 05:23:32.112981] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:18.546 request: 00:23:18.546 { 00:23:18.546 "name": "TLSTEST", 00:23:18.546 "trtype": "tcp", 00:23:18.546 "traddr": "10.0.0.2", 00:23:18.546 "adrfam": "ipv4", 00:23:18.546 "trsvcid": "4420", 00:23:18.546 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:18.546 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:18.546 "prchk_reftag": false, 00:23:18.546 "prchk_guard": false, 00:23:18.546 "hdgst": false, 00:23:18.546 "ddgst": false, 00:23:18.546 "psk": "key0", 00:23:18.546 "allow_unrecognized_csi": false, 00:23:18.546 "method": "bdev_nvme_attach_controller", 00:23:18.546 "req_id": 1 00:23:18.546 } 00:23:18.546 Got JSON-RPC error response 00:23:18.546 response: 00:23:18.546 { 00:23:18.546 "code": -126, 00:23:18.546 "message": "Required key not available" 00:23:18.546 } 00:23:18.546 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 351676 00:23:18.546 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351676 ']' 00:23:18.546 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351676 00:23:18.546 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.546 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.546 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351676 00:23:18.546 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:18.547 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:18.547 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351676' 00:23:18.547 killing process with pid 351676 00:23:18.547 
05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351676 00:23:18.547 Received shutdown signal, test time was about 10.000000 seconds 00:23:18.547 00:23:18.547 Latency(us) 00:23:18.547 [2024-12-15T04:23:32.234Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:18.547 [2024-12-15T04:23:32.234Z] =================================================================================================================== 00:23:18.547 [2024-12-15T04:23:32.234Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:18.547 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351676 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # killprocess 347128 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 347128 ']' 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 347128 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 347128 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 347128' 00:23:18.840 killing process with pid 347128 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 347128 00:23:18.840 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 347128 00:23:19.123 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@730 -- # local prefix key digest 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@732 -- # digest=2 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@733 -- # python - 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # mktemp 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # key_long_path=/tmp/tmp.eXxkOEzbcm 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:23:19.124 05:23:32 
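`format_interchange_psk` above wraps the raw 48-byte configured key in the NVMe TLS PSK interchange format: the `NVMeTLSkey-1` prefix, a two-digit hash field (`02` here), and base64 of the key bytes with their little-endian CRC32 appended. A sketch of what the inline `python -` step most likely computes (the function signature and CRC handling are assumptions inferred from the printed output):

```python
import base64
import zlib

def format_interchange_psk(key: bytes, hash_id: int) -> str:
    # base64(key || crc32_le(key)), wrapped as "NVMeTLSkey-1:<hh>:<b64>:".
    # Inferred from the key_long value printed in the trace above.
    crc = zlib.crc32(key).to_bytes(4, "little")
    b64 = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{hash_id:02}:{b64}:"

key = b"00112233445566778899aabbccddeeff0011223344556677"
psk = format_interchange_psk(key, 2)
```

Since 48 key bytes plus a 4-byte CRC is 52 bytes, the base64 payload is always 72 characters with `==` padding, matching the `...wWXNJw==:` tail of the key the log writes to `/tmp/tmp.eXxkOEzbcm` with mode 0600.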
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # chmod 0600 /tmp/tmp.eXxkOEzbcm 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@164 -- # nvmfappstart -m 0x2 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=351923 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 351923 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 351923 ']' 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:19.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:19.124 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.124 [2024-12-15 05:23:32.643027] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:19.124 [2024-12-15 05:23:32.643071] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:19.124 [2024-12-15 05:23:32.717639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:19.124 [2024-12-15 05:23:32.738044] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:19.124 [2024-12-15 05:23:32.738076] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:19.124 [2024-12-15 05:23:32.738083] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:19.124 [2024-12-15 05:23:32.738089] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:19.124 [2024-12-15 05:23:32.738094] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:19.124 [2024-12-15 05:23:32.738557] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # setup_nvmf_tgt /tmp/tmp.eXxkOEzbcm 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.eXxkOEzbcm 00:23:19.443 05:23:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:19.443 [2024-12-15 05:23:33.036536] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:19.443 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:19.729 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:20.022 [2024-12-15 05:23:33.413527] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:20.022 [2024-12-15 05:23:33.413722] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:20.022 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:20.022 malloc0 00:23:20.022 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:20.328 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:20.328 05:23:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@168 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXxkOEzbcm 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eXxkOEzbcm 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=352177 00:23:20.633 05:23:34 
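Walking back through the trace, `setup_nvmf_tgt` configures the TLS-capable target side with a short sequence of `rpc.py` calls before bdevperf is launched. Collected in order below (arguments transcribed from the log lines above; the Python list is just a convenient way to summarize and check them):

```python
# The rpc.py invocations performed by setup_nvmf_tgt in the trace above,
# in order; each string starts with the JSON-RPC method name.
SETUP_CALLS = [
    "nvmf_create_transport -t tcp -o",
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10",
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k",
    "bdev_malloc_create 32 4096 -b malloc0",
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1",
    "keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm",
    "nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0",
]
methods = [call.split()[0] for call in SETUP_CALLS]
```

The `-k` on the listener is what turns on the experimental TLS listener seen in the `nvmf_tcp_listen` notice, and `--psk key0` on `nvmf_subsystem_add_host` ties the registered key to that host NQN, so this time the host-side attach can actually complete the handshake.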
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 352177 /var/tmp/bdevperf.sock 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 352177 ']' 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:20.633 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:20.633 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:20.633 [2024-12-15 05:23:34.189425] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:20.633 [2024-12-15 05:23:34.189471] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid352177 ] 00:23:20.633 [2024-12-15 05:23:34.262666] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:20.633 [2024-12-15 05:23:34.284772] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:20.929 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:20.929 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:20.929 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:20.929 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:21.188 [2024-12-15 05:23:34.756129] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:21.188 TLSTESTn1 00:23:21.188 05:23:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:21.447 Running I/O for 10 seconds... 
00:23:23.317 5240.00 IOPS, 20.47 MiB/s [2024-12-15T04:23:38.382Z] 5260.00 IOPS, 20.55 MiB/s [2024-12-15T04:23:38.948Z] 5193.00 IOPS, 20.29 MiB/s [2024-12-15T04:23:40.325Z] 5212.50 IOPS, 20.36 MiB/s [2024-12-15T04:23:41.262Z] 5209.00 IOPS, 20.35 MiB/s [2024-12-15T04:23:42.200Z] 5220.17 IOPS, 20.39 MiB/s [2024-12-15T04:23:43.138Z] 5258.43 IOPS, 20.54 MiB/s [2024-12-15T04:23:44.076Z] 5297.00 IOPS, 20.69 MiB/s [2024-12-15T04:23:45.011Z] 5301.22 IOPS, 20.71 MiB/s [2024-12-15T04:23:45.011Z] 5256.50 IOPS, 20.53 MiB/s 00:23:31.324 Latency(us) 00:23:31.324 [2024-12-15T04:23:45.011Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.324 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:31.324 Verification LBA range: start 0x0 length 0x2000 00:23:31.324 TLSTESTn1 : 10.04 5247.93 20.50 0.00 0.00 24331.21 5367.71 39446.43 00:23:31.324 [2024-12-15T04:23:45.011Z] =================================================================================================================== 00:23:31.324 [2024-12-15T04:23:45.011Z] Total : 5247.93 20.50 0.00 0.00 24331.21 5367.71 39446.43 00:23:31.324 { 00:23:31.324 "results": [ 00:23:31.324 { 00:23:31.324 "job": "TLSTESTn1", 00:23:31.324 "core_mask": "0x4", 00:23:31.324 "workload": "verify", 00:23:31.324 "status": "finished", 00:23:31.324 "verify_range": { 00:23:31.324 "start": 0, 00:23:31.324 "length": 8192 00:23:31.324 }, 00:23:31.324 "queue_depth": 128, 00:23:31.324 "io_size": 4096, 00:23:31.324 "runtime": 10.04053, 00:23:31.324 "iops": 5247.930139146041, 00:23:31.324 "mibps": 20.499727106039224, 00:23:31.324 "io_failed": 0, 00:23:31.324 "io_timeout": 0, 00:23:31.324 "avg_latency_us": 24331.208649148874, 00:23:31.324 "min_latency_us": 5367.710476190477, 00:23:31.324 "max_latency_us": 39446.43047619048 00:23:31.324 } 00:23:31.324 ], 00:23:31.324 "core_count": 1 00:23:31.324 } 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 352177 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 352177 ']' 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 352177 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 352177 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 352177' 00:23:31.583 killing process with pid 352177 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 352177 00:23:31.583 Received shutdown signal, test time was about 10.000000 seconds 00:23:31.583 00:23:31.583 Latency(us) 00:23:31.583 [2024-12-15T04:23:45.270Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:31.583 [2024-12-15T04:23:45.270Z] =================================================================================================================== 00:23:31.583 [2024-12-15T04:23:45.270Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:31.583 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 352177 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # chmod 0666 /tmp/tmp.eXxkOEzbcm 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@172 -- # 
NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXxkOEzbcm 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXxkOEzbcm 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.eXxkOEzbcm 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.eXxkOEzbcm 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353966 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353966 /var/tmp/bdevperf.sock 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 353966 ']' 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.584 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.584 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:31.842 [2024-12-15 05:23:45.283337] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:31.842 [2024-12-15 05:23:45.283382] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353966 ] 00:23:31.842 [2024-12-15 05:23:45.347298] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:31.842 [2024-12-15 05:23:45.368416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:31.842 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.842 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:31.842 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:32.100 [2024-12-15 05:23:45.626770] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eXxkOEzbcm': 0100666 00:23:32.100 [2024-12-15 05:23:45.626793] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:32.100 request: 00:23:32.100 { 00:23:32.100 "name": "key0", 00:23:32.100 "path": "/tmp/tmp.eXxkOEzbcm", 00:23:32.100 "method": "keyring_file_add_key", 00:23:32.100 "req_id": 1 00:23:32.100 } 00:23:32.100 Got JSON-RPC error response 00:23:32.100 response: 00:23:32.100 { 00:23:32.100 "code": -1, 00:23:32.100 "message": "Operation not permitted" 00:23:32.100 } 00:23:32.100 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:32.359 [2024-12-15 05:23:45.831388] bdev_nvme_rpc.c: 
515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:32.359 [2024-12-15 05:23:45.831420] bdev_nvme.c:6754:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:23:32.359 request: 00:23:32.359 { 00:23:32.359 "name": "TLSTEST", 00:23:32.359 "trtype": "tcp", 00:23:32.359 "traddr": "10.0.0.2", 00:23:32.359 "adrfam": "ipv4", 00:23:32.359 "trsvcid": "4420", 00:23:32.359 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:32.359 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:32.359 "prchk_reftag": false, 00:23:32.359 "prchk_guard": false, 00:23:32.359 "hdgst": false, 00:23:32.359 "ddgst": false, 00:23:32.359 "psk": "key0", 00:23:32.359 "allow_unrecognized_csi": false, 00:23:32.359 "method": "bdev_nvme_attach_controller", 00:23:32.359 "req_id": 1 00:23:32.359 } 00:23:32.359 Got JSON-RPC error response 00:23:32.359 response: 00:23:32.359 { 00:23:32.359 "code": -126, 00:23:32.359 "message": "Required key not available" 00:23:32.359 } 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 353966 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 353966 ']' 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 353966 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 353966 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 353966' 00:23:32.359 killing process with pid 353966 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 353966 00:23:32.359 Received shutdown signal, test time was about 10.000000 seconds 00:23:32.359 00:23:32.359 Latency(us) 00:23:32.359 [2024-12-15T04:23:46.046Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:32.359 [2024-12-15T04:23:46.046Z] =================================================================================================================== 00:23:32.359 [2024-12-15T04:23:46.046Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:32.359 05:23:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 353966 00:23:32.618 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:23:32.618 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:32.618 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:32.618 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # killprocess 351923 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 351923 ']' 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 351923 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 351923 00:23:32.619 05:23:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 351923' 00:23:32.619 killing process with pid 351923 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 351923 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 351923 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # nvmfappstart -m 0x2 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354202 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354202 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354202 ']' 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:23:32.619 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:32.619 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.878 [2024-12-15 05:23:46.320423] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:32.878 [2024-12-15 05:23:46.320467] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:32.878 [2024-12-15 05:23:46.385611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:32.878 [2024-12-15 05:23:46.406068] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:32.878 [2024-12-15 05:23:46.406103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:32.878 [2024-12-15 05:23:46.406110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:32.878 [2024-12-15 05:23:46.406116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:32.878 [2024-12-15 05:23:46.406121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:32.878 [2024-12-15 05:23:46.406599] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@178 -- # NOT setup_nvmf_tgt /tmp/tmp.eXxkOEzbcm 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.eXxkOEzbcm 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.eXxkOEzbcm 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.eXxkOEzbcm 00:23:32.878 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:33.137 [2024-12-15 05:23:46.708726] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:33.137 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:33.395 05:23:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:33.395 [2024-12-15 05:23:47.077694] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:33.396 [2024-12-15 05:23:47.077884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:33.654 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:33.654 malloc0 00:23:33.654 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:33.913 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:34.172 [2024-12-15 05:23:47.607095] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.eXxkOEzbcm': 0100666 00:23:34.172 [2024-12-15 05:23:47.607122] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:23:34.172 request: 00:23:34.172 { 00:23:34.172 "name": "key0", 00:23:34.172 "path": "/tmp/tmp.eXxkOEzbcm", 00:23:34.172 "method": "keyring_file_add_key", 00:23:34.172 "req_id": 1 
00:23:34.172 } 00:23:34.172 Got JSON-RPC error response 00:23:34.172 response: 00:23:34.172 { 00:23:34.172 "code": -1, 00:23:34.172 "message": "Operation not permitted" 00:23:34.172 } 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:34.172 [2024-12-15 05:23:47.795599] tcp.c:3777:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:23:34.172 [2024-12-15 05:23:47.795630] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:23:34.172 request: 00:23:34.172 { 00:23:34.172 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:34.172 "host": "nqn.2016-06.io.spdk:host1", 00:23:34.172 "psk": "key0", 00:23:34.172 "method": "nvmf_subsystem_add_host", 00:23:34.172 "req_id": 1 00:23:34.172 } 00:23:34.172 Got JSON-RPC error response 00:23:34.172 response: 00:23:34.172 { 00:23:34.172 "code": -32603, 00:23:34.172 "message": "Internal error" 00:23:34.172 } 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # killprocess 354202 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354202 ']' 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354202 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:34.172 05:23:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:34.172 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354202 00:23:34.432 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:34.432 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:34.432 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354202' 00:23:34.432 killing process with pid 354202 00:23:34.432 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354202 00:23:34.432 05:23:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354202 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@182 -- # chmod 0600 /tmp/tmp.eXxkOEzbcm 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # nvmfappstart -m 0x2 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354466 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354466 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354466 ']' 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:34.432 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:34.432 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.432 [2024-12-15 05:23:48.085344] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:34.432 [2024-12-15 05:23:48.085387] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:34.691 [2024-12-15 05:23:48.158674] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:34.691 [2024-12-15 05:23:48.179304] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:34.691 [2024-12-15 05:23:48.179340] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:34.691 [2024-12-15 05:23:48.179347] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:34.691 [2024-12-15 05:23:48.179354] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:34.691 [2024-12-15 05:23:48.179359] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:34.691 [2024-12-15 05:23:48.179813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # setup_nvmf_tgt /tmp/tmp.eXxkOEzbcm 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.eXxkOEzbcm 00:23:34.691 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:34.950 [2024-12-15 05:23:48.474091] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:34.950 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:35.209 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:35.209 [2024-12-15 05:23:48.835021] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:35.209 [2024-12-15 05:23:48.835229] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:35.209 05:23:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:35.468 malloc0 00:23:35.468 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:35.727 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:35.727 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # bdevperf_pid=354710 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # waitforlisten 354710 /var/tmp/bdevperf.sock 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354710 ']' 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:35.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:35.986 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:35.986 [2024-12-15 05:23:49.608824] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:35.986 [2024-12-15 05:23:49.608870] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid354710 ] 00:23:36.245 [2024-12-15 05:23:49.683361] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:36.245 [2024-12-15 05:23:49.705149] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:36.245 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:36.245 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:36.245 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:36.506 05:23:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:23:36.506 [2024-12-15 05:23:50.176020] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:36.765 TLSTESTn1 00:23:36.765 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:23:37.024 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@198 -- # tgtconf='{ 00:23:37.024 "subsystems": [ 00:23:37.024 { 00:23:37.024 "subsystem": "keyring", 00:23:37.024 "config": [ 00:23:37.024 { 00:23:37.024 "method": "keyring_file_add_key", 00:23:37.024 "params": { 00:23:37.024 "name": "key0", 00:23:37.024 "path": "/tmp/tmp.eXxkOEzbcm" 00:23:37.024 } 00:23:37.024 } 00:23:37.024 ] 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "subsystem": "iobuf", 00:23:37.024 "config": [ 00:23:37.024 { 00:23:37.024 "method": "iobuf_set_options", 00:23:37.024 "params": { 00:23:37.024 "small_pool_count": 8192, 00:23:37.024 "large_pool_count": 1024, 00:23:37.024 "small_bufsize": 8192, 00:23:37.024 "large_bufsize": 135168, 00:23:37.024 "enable_numa": false 00:23:37.024 } 00:23:37.024 } 00:23:37.024 ] 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "subsystem": "sock", 00:23:37.024 "config": [ 00:23:37.024 { 00:23:37.024 "method": "sock_set_default_impl", 00:23:37.024 "params": { 00:23:37.024 "impl_name": "posix" 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "sock_impl_set_options", 00:23:37.024 "params": { 00:23:37.024 "impl_name": "ssl", 00:23:37.024 "recv_buf_size": 4096, 00:23:37.024 "send_buf_size": 4096, 00:23:37.024 "enable_recv_pipe": true, 00:23:37.024 "enable_quickack": false, 00:23:37.024 "enable_placement_id": 0, 00:23:37.024 "enable_zerocopy_send_server": true, 00:23:37.024 "enable_zerocopy_send_client": false, 00:23:37.024 "zerocopy_threshold": 0, 00:23:37.024 "tls_version": 0, 00:23:37.024 "enable_ktls": false 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "sock_impl_set_options", 00:23:37.024 "params": { 00:23:37.024 "impl_name": "posix", 00:23:37.024 "recv_buf_size": 2097152, 00:23:37.024 "send_buf_size": 2097152, 00:23:37.024 "enable_recv_pipe": true, 00:23:37.024 "enable_quickack": false, 00:23:37.024 "enable_placement_id": 0, 
00:23:37.024 "enable_zerocopy_send_server": true, 00:23:37.024 "enable_zerocopy_send_client": false, 00:23:37.024 "zerocopy_threshold": 0, 00:23:37.024 "tls_version": 0, 00:23:37.024 "enable_ktls": false 00:23:37.024 } 00:23:37.024 } 00:23:37.024 ] 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "subsystem": "vmd", 00:23:37.024 "config": [] 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "subsystem": "accel", 00:23:37.024 "config": [ 00:23:37.024 { 00:23:37.024 "method": "accel_set_options", 00:23:37.024 "params": { 00:23:37.024 "small_cache_size": 128, 00:23:37.024 "large_cache_size": 16, 00:23:37.024 "task_count": 2048, 00:23:37.024 "sequence_count": 2048, 00:23:37.024 "buf_count": 2048 00:23:37.024 } 00:23:37.024 } 00:23:37.024 ] 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "subsystem": "bdev", 00:23:37.024 "config": [ 00:23:37.024 { 00:23:37.024 "method": "bdev_set_options", 00:23:37.024 "params": { 00:23:37.024 "bdev_io_pool_size": 65535, 00:23:37.024 "bdev_io_cache_size": 256, 00:23:37.024 "bdev_auto_examine": true, 00:23:37.024 "iobuf_small_cache_size": 128, 00:23:37.024 "iobuf_large_cache_size": 16 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "bdev_raid_set_options", 00:23:37.024 "params": { 00:23:37.024 "process_window_size_kb": 1024, 00:23:37.024 "process_max_bandwidth_mb_sec": 0 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "bdev_iscsi_set_options", 00:23:37.024 "params": { 00:23:37.024 "timeout_sec": 30 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "bdev_nvme_set_options", 00:23:37.024 "params": { 00:23:37.024 "action_on_timeout": "none", 00:23:37.024 "timeout_us": 0, 00:23:37.024 "timeout_admin_us": 0, 00:23:37.024 "keep_alive_timeout_ms": 10000, 00:23:37.024 "arbitration_burst": 0, 00:23:37.024 "low_priority_weight": 0, 00:23:37.024 "medium_priority_weight": 0, 00:23:37.024 "high_priority_weight": 0, 00:23:37.024 "nvme_adminq_poll_period_us": 10000, 00:23:37.024 "nvme_ioq_poll_period_us": 0, 
00:23:37.024 "io_queue_requests": 0, 00:23:37.024 "delay_cmd_submit": true, 00:23:37.024 "transport_retry_count": 4, 00:23:37.024 "bdev_retry_count": 3, 00:23:37.024 "transport_ack_timeout": 0, 00:23:37.024 "ctrlr_loss_timeout_sec": 0, 00:23:37.024 "reconnect_delay_sec": 0, 00:23:37.024 "fast_io_fail_timeout_sec": 0, 00:23:37.024 "disable_auto_failback": false, 00:23:37.024 "generate_uuids": false, 00:23:37.024 "transport_tos": 0, 00:23:37.024 "nvme_error_stat": false, 00:23:37.024 "rdma_srq_size": 0, 00:23:37.024 "io_path_stat": false, 00:23:37.024 "allow_accel_sequence": false, 00:23:37.024 "rdma_max_cq_size": 0, 00:23:37.024 "rdma_cm_event_timeout_ms": 0, 00:23:37.024 "dhchap_digests": [ 00:23:37.024 "sha256", 00:23:37.024 "sha384", 00:23:37.024 "sha512" 00:23:37.024 ], 00:23:37.024 "dhchap_dhgroups": [ 00:23:37.024 "null", 00:23:37.024 "ffdhe2048", 00:23:37.024 "ffdhe3072", 00:23:37.024 "ffdhe4096", 00:23:37.024 "ffdhe6144", 00:23:37.024 "ffdhe8192" 00:23:37.024 ], 00:23:37.024 "rdma_umr_per_io": false 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "bdev_nvme_set_hotplug", 00:23:37.024 "params": { 00:23:37.024 "period_us": 100000, 00:23:37.024 "enable": false 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "bdev_malloc_create", 00:23:37.024 "params": { 00:23:37.024 "name": "malloc0", 00:23:37.024 "num_blocks": 8192, 00:23:37.024 "block_size": 4096, 00:23:37.024 "physical_block_size": 4096, 00:23:37.024 "uuid": "361cefc9-d3d3-4001-9178-853ab874e832", 00:23:37.024 "optimal_io_boundary": 0, 00:23:37.024 "md_size": 0, 00:23:37.024 "dif_type": 0, 00:23:37.024 "dif_is_head_of_md": false, 00:23:37.024 "dif_pi_format": 0 00:23:37.024 } 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "method": "bdev_wait_for_examine" 00:23:37.024 } 00:23:37.024 ] 00:23:37.024 }, 00:23:37.024 { 00:23:37.024 "subsystem": "nbd", 00:23:37.024 "config": [] 00:23:37.024 }, 00:23:37.025 { 00:23:37.025 "subsystem": "scheduler", 00:23:37.025 "config": [ 
00:23:37.025 { 00:23:37.025 "method": "framework_set_scheduler", 00:23:37.025 "params": { 00:23:37.025 "name": "static" 00:23:37.025 } 00:23:37.025 } 00:23:37.025 ] 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "subsystem": "nvmf", 00:23:37.025 "config": [ 00:23:37.025 { 00:23:37.025 "method": "nvmf_set_config", 00:23:37.025 "params": { 00:23:37.025 "discovery_filter": "match_any", 00:23:37.025 "admin_cmd_passthru": { 00:23:37.025 "identify_ctrlr": false 00:23:37.025 }, 00:23:37.025 "dhchap_digests": [ 00:23:37.025 "sha256", 00:23:37.025 "sha384", 00:23:37.025 "sha512" 00:23:37.025 ], 00:23:37.025 "dhchap_dhgroups": [ 00:23:37.025 "null", 00:23:37.025 "ffdhe2048", 00:23:37.025 "ffdhe3072", 00:23:37.025 "ffdhe4096", 00:23:37.025 "ffdhe6144", 00:23:37.025 "ffdhe8192" 00:23:37.025 ] 00:23:37.025 } 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "method": "nvmf_set_max_subsystems", 00:23:37.025 "params": { 00:23:37.025 "max_subsystems": 1024 00:23:37.025 } 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "method": "nvmf_set_crdt", 00:23:37.025 "params": { 00:23:37.025 "crdt1": 0, 00:23:37.025 "crdt2": 0, 00:23:37.025 "crdt3": 0 00:23:37.025 } 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "method": "nvmf_create_transport", 00:23:37.025 "params": { 00:23:37.025 "trtype": "TCP", 00:23:37.025 "max_queue_depth": 128, 00:23:37.025 "max_io_qpairs_per_ctrlr": 127, 00:23:37.025 "in_capsule_data_size": 4096, 00:23:37.025 "max_io_size": 131072, 00:23:37.025 "io_unit_size": 131072, 00:23:37.025 "max_aq_depth": 128, 00:23:37.025 "num_shared_buffers": 511, 00:23:37.025 "buf_cache_size": 4294967295, 00:23:37.025 "dif_insert_or_strip": false, 00:23:37.025 "zcopy": false, 00:23:37.025 "c2h_success": false, 00:23:37.025 "sock_priority": 0, 00:23:37.025 "abort_timeout_sec": 1, 00:23:37.025 "ack_timeout": 0, 00:23:37.025 "data_wr_pool_size": 0 00:23:37.025 } 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "method": "nvmf_create_subsystem", 00:23:37.025 "params": { 00:23:37.025 "nqn": 
"nqn.2016-06.io.spdk:cnode1", 00:23:37.025 "allow_any_host": false, 00:23:37.025 "serial_number": "SPDK00000000000001", 00:23:37.025 "model_number": "SPDK bdev Controller", 00:23:37.025 "max_namespaces": 10, 00:23:37.025 "min_cntlid": 1, 00:23:37.025 "max_cntlid": 65519, 00:23:37.025 "ana_reporting": false 00:23:37.025 } 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "method": "nvmf_subsystem_add_host", 00:23:37.025 "params": { 00:23:37.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.025 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.025 "psk": "key0" 00:23:37.025 } 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "method": "nvmf_subsystem_add_ns", 00:23:37.025 "params": { 00:23:37.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.025 "namespace": { 00:23:37.025 "nsid": 1, 00:23:37.025 "bdev_name": "malloc0", 00:23:37.025 "nguid": "361CEFC9D3D340019178853AB874E832", 00:23:37.025 "uuid": "361cefc9-d3d3-4001-9178-853ab874e832", 00:23:37.025 "no_auto_visible": false 00:23:37.025 } 00:23:37.025 } 00:23:37.025 }, 00:23:37.025 { 00:23:37.025 "method": "nvmf_subsystem_add_listener", 00:23:37.025 "params": { 00:23:37.025 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.025 "listen_address": { 00:23:37.025 "trtype": "TCP", 00:23:37.025 "adrfam": "IPv4", 00:23:37.025 "traddr": "10.0.0.2", 00:23:37.025 "trsvcid": "4420" 00:23:37.025 }, 00:23:37.025 "secure_channel": true 00:23:37.025 } 00:23:37.025 } 00:23:37.025 ] 00:23:37.025 } 00:23:37.025 ] 00:23:37.025 }' 00:23:37.025 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # bdevperfconf='{ 00:23:37.285 "subsystems": [ 00:23:37.285 { 00:23:37.285 "subsystem": "keyring", 00:23:37.285 "config": [ 00:23:37.285 { 00:23:37.285 "method": "keyring_file_add_key", 00:23:37.285 "params": { 00:23:37.285 "name": "key0", 00:23:37.285 "path": 
"/tmp/tmp.eXxkOEzbcm" 00:23:37.285 } 00:23:37.285 } 00:23:37.285 ] 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "subsystem": "iobuf", 00:23:37.285 "config": [ 00:23:37.285 { 00:23:37.285 "method": "iobuf_set_options", 00:23:37.285 "params": { 00:23:37.285 "small_pool_count": 8192, 00:23:37.285 "large_pool_count": 1024, 00:23:37.285 "small_bufsize": 8192, 00:23:37.285 "large_bufsize": 135168, 00:23:37.285 "enable_numa": false 00:23:37.285 } 00:23:37.285 } 00:23:37.285 ] 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "subsystem": "sock", 00:23:37.285 "config": [ 00:23:37.285 { 00:23:37.285 "method": "sock_set_default_impl", 00:23:37.285 "params": { 00:23:37.285 "impl_name": "posix" 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "sock_impl_set_options", 00:23:37.285 "params": { 00:23:37.285 "impl_name": "ssl", 00:23:37.285 "recv_buf_size": 4096, 00:23:37.285 "send_buf_size": 4096, 00:23:37.285 "enable_recv_pipe": true, 00:23:37.285 "enable_quickack": false, 00:23:37.285 "enable_placement_id": 0, 00:23:37.285 "enable_zerocopy_send_server": true, 00:23:37.285 "enable_zerocopy_send_client": false, 00:23:37.285 "zerocopy_threshold": 0, 00:23:37.285 "tls_version": 0, 00:23:37.285 "enable_ktls": false 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "sock_impl_set_options", 00:23:37.285 "params": { 00:23:37.285 "impl_name": "posix", 00:23:37.285 "recv_buf_size": 2097152, 00:23:37.285 "send_buf_size": 2097152, 00:23:37.285 "enable_recv_pipe": true, 00:23:37.285 "enable_quickack": false, 00:23:37.285 "enable_placement_id": 0, 00:23:37.285 "enable_zerocopy_send_server": true, 00:23:37.285 "enable_zerocopy_send_client": false, 00:23:37.285 "zerocopy_threshold": 0, 00:23:37.285 "tls_version": 0, 00:23:37.285 "enable_ktls": false 00:23:37.285 } 00:23:37.285 } 00:23:37.285 ] 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "subsystem": "vmd", 00:23:37.285 "config": [] 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "subsystem": "accel", 00:23:37.285 
"config": [ 00:23:37.285 { 00:23:37.285 "method": "accel_set_options", 00:23:37.285 "params": { 00:23:37.285 "small_cache_size": 128, 00:23:37.285 "large_cache_size": 16, 00:23:37.285 "task_count": 2048, 00:23:37.285 "sequence_count": 2048, 00:23:37.285 "buf_count": 2048 00:23:37.285 } 00:23:37.285 } 00:23:37.285 ] 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "subsystem": "bdev", 00:23:37.285 "config": [ 00:23:37.285 { 00:23:37.285 "method": "bdev_set_options", 00:23:37.285 "params": { 00:23:37.285 "bdev_io_pool_size": 65535, 00:23:37.285 "bdev_io_cache_size": 256, 00:23:37.285 "bdev_auto_examine": true, 00:23:37.285 "iobuf_small_cache_size": 128, 00:23:37.285 "iobuf_large_cache_size": 16 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "bdev_raid_set_options", 00:23:37.285 "params": { 00:23:37.285 "process_window_size_kb": 1024, 00:23:37.285 "process_max_bandwidth_mb_sec": 0 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "bdev_iscsi_set_options", 00:23:37.285 "params": { 00:23:37.285 "timeout_sec": 30 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "bdev_nvme_set_options", 00:23:37.285 "params": { 00:23:37.285 "action_on_timeout": "none", 00:23:37.285 "timeout_us": 0, 00:23:37.285 "timeout_admin_us": 0, 00:23:37.285 "keep_alive_timeout_ms": 10000, 00:23:37.285 "arbitration_burst": 0, 00:23:37.285 "low_priority_weight": 0, 00:23:37.285 "medium_priority_weight": 0, 00:23:37.285 "high_priority_weight": 0, 00:23:37.285 "nvme_adminq_poll_period_us": 10000, 00:23:37.285 "nvme_ioq_poll_period_us": 0, 00:23:37.285 "io_queue_requests": 512, 00:23:37.285 "delay_cmd_submit": true, 00:23:37.285 "transport_retry_count": 4, 00:23:37.285 "bdev_retry_count": 3, 00:23:37.285 "transport_ack_timeout": 0, 00:23:37.285 "ctrlr_loss_timeout_sec": 0, 00:23:37.285 "reconnect_delay_sec": 0, 00:23:37.285 "fast_io_fail_timeout_sec": 0, 00:23:37.285 "disable_auto_failback": false, 00:23:37.285 "generate_uuids": false, 00:23:37.285 
"transport_tos": 0, 00:23:37.285 "nvme_error_stat": false, 00:23:37.285 "rdma_srq_size": 0, 00:23:37.285 "io_path_stat": false, 00:23:37.285 "allow_accel_sequence": false, 00:23:37.285 "rdma_max_cq_size": 0, 00:23:37.285 "rdma_cm_event_timeout_ms": 0, 00:23:37.285 "dhchap_digests": [ 00:23:37.285 "sha256", 00:23:37.285 "sha384", 00:23:37.285 "sha512" 00:23:37.285 ], 00:23:37.285 "dhchap_dhgroups": [ 00:23:37.285 "null", 00:23:37.285 "ffdhe2048", 00:23:37.285 "ffdhe3072", 00:23:37.285 "ffdhe4096", 00:23:37.285 "ffdhe6144", 00:23:37.285 "ffdhe8192" 00:23:37.285 ], 00:23:37.285 "rdma_umr_per_io": false 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "bdev_nvme_attach_controller", 00:23:37.285 "params": { 00:23:37.285 "name": "TLSTEST", 00:23:37.285 "trtype": "TCP", 00:23:37.285 "adrfam": "IPv4", 00:23:37.285 "traddr": "10.0.0.2", 00:23:37.285 "trsvcid": "4420", 00:23:37.285 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.285 "prchk_reftag": false, 00:23:37.285 "prchk_guard": false, 00:23:37.285 "ctrlr_loss_timeout_sec": 0, 00:23:37.285 "reconnect_delay_sec": 0, 00:23:37.285 "fast_io_fail_timeout_sec": 0, 00:23:37.285 "psk": "key0", 00:23:37.285 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:37.285 "hdgst": false, 00:23:37.285 "ddgst": false, 00:23:37.285 "multipath": "multipath" 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "bdev_nvme_set_hotplug", 00:23:37.285 "params": { 00:23:37.285 "period_us": 100000, 00:23:37.285 "enable": false 00:23:37.285 } 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "method": "bdev_wait_for_examine" 00:23:37.285 } 00:23:37.285 ] 00:23:37.285 }, 00:23:37.285 { 00:23:37.285 "subsystem": "nbd", 00:23:37.285 "config": [] 00:23:37.285 } 00:23:37.285 ] 00:23:37.285 }' 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # killprocess 354710 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354710 ']' 00:23:37.285 05:23:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354710 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354710 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354710' 00:23:37.285 killing process with pid 354710 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354710 00:23:37.285 Received shutdown signal, test time was about 10.000000 seconds 00:23:37.285 00:23:37.285 Latency(us) 00:23:37.285 [2024-12-15T04:23:50.972Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:37.285 [2024-12-15T04:23:50.972Z] =================================================================================================================== 00:23:37.285 [2024-12-15T04:23:50.972Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:23:37.285 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354710 00:23:37.545 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@202 -- # killprocess 354466 00:23:37.545 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 354466 ']' 00:23:37.545 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354466 00:23:37.545 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:37.545 
05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:37.545 05:23:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354466 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354466' 00:23:37.545 killing process with pid 354466 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354466 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354466 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:37.545 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # echo '{ 00:23:37.545 "subsystems": [ 00:23:37.545 { 00:23:37.545 "subsystem": "keyring", 00:23:37.545 "config": [ 00:23:37.545 { 00:23:37.545 "method": "keyring_file_add_key", 00:23:37.545 "params": { 00:23:37.545 "name": "key0", 00:23:37.545 "path": "/tmp/tmp.eXxkOEzbcm" 00:23:37.545 } 00:23:37.545 } 00:23:37.545 ] 00:23:37.545 }, 00:23:37.545 { 00:23:37.545 "subsystem": "iobuf", 00:23:37.545 "config": [ 00:23:37.545 { 00:23:37.545 "method": "iobuf_set_options", 00:23:37.545 "params": { 00:23:37.545 "small_pool_count": 8192, 00:23:37.545 "large_pool_count": 1024, 00:23:37.545 "small_bufsize": 8192, 00:23:37.545 "large_bufsize": 135168, 00:23:37.545 "enable_numa": false 00:23:37.545 } 
00:23:37.545 } 00:23:37.545 ] 00:23:37.545 }, 00:23:37.545 { 00:23:37.545 "subsystem": "sock", 00:23:37.545 "config": [ 00:23:37.545 { 00:23:37.545 "method": "sock_set_default_impl", 00:23:37.545 "params": { 00:23:37.545 "impl_name": "posix" 00:23:37.545 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "sock_impl_set_options", 00:23:37.546 "params": { 00:23:37.546 "impl_name": "ssl", 00:23:37.546 "recv_buf_size": 4096, 00:23:37.546 "send_buf_size": 4096, 00:23:37.546 "enable_recv_pipe": true, 00:23:37.546 "enable_quickack": false, 00:23:37.546 "enable_placement_id": 0, 00:23:37.546 "enable_zerocopy_send_server": true, 00:23:37.546 "enable_zerocopy_send_client": false, 00:23:37.546 "zerocopy_threshold": 0, 00:23:37.546 "tls_version": 0, 00:23:37.546 "enable_ktls": false 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "sock_impl_set_options", 00:23:37.546 "params": { 00:23:37.546 "impl_name": "posix", 00:23:37.546 "recv_buf_size": 2097152, 00:23:37.546 "send_buf_size": 2097152, 00:23:37.546 "enable_recv_pipe": true, 00:23:37.546 "enable_quickack": false, 00:23:37.546 "enable_placement_id": 0, 00:23:37.546 "enable_zerocopy_send_server": true, 00:23:37.546 "enable_zerocopy_send_client": false, 00:23:37.546 "zerocopy_threshold": 0, 00:23:37.546 "tls_version": 0, 00:23:37.546 "enable_ktls": false 00:23:37.546 } 00:23:37.546 } 00:23:37.546 ] 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "subsystem": "vmd", 00:23:37.546 "config": [] 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "subsystem": "accel", 00:23:37.546 "config": [ 00:23:37.546 { 00:23:37.546 "method": "accel_set_options", 00:23:37.546 "params": { 00:23:37.546 "small_cache_size": 128, 00:23:37.546 "large_cache_size": 16, 00:23:37.546 "task_count": 2048, 00:23:37.546 "sequence_count": 2048, 00:23:37.546 "buf_count": 2048 00:23:37.546 } 00:23:37.546 } 00:23:37.546 ] 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "subsystem": "bdev", 00:23:37.546 "config": [ 00:23:37.546 { 00:23:37.546 "method": 
"bdev_set_options", 00:23:37.546 "params": { 00:23:37.546 "bdev_io_pool_size": 65535, 00:23:37.546 "bdev_io_cache_size": 256, 00:23:37.546 "bdev_auto_examine": true, 00:23:37.546 "iobuf_small_cache_size": 128, 00:23:37.546 "iobuf_large_cache_size": 16 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "bdev_raid_set_options", 00:23:37.546 "params": { 00:23:37.546 "process_window_size_kb": 1024, 00:23:37.546 "process_max_bandwidth_mb_sec": 0 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "bdev_iscsi_set_options", 00:23:37.546 "params": { 00:23:37.546 "timeout_sec": 30 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "bdev_nvme_set_options", 00:23:37.546 "params": { 00:23:37.546 "action_on_timeout": "none", 00:23:37.546 "timeout_us": 0, 00:23:37.546 "timeout_admin_us": 0, 00:23:37.546 "keep_alive_timeout_ms": 10000, 00:23:37.546 "arbitration_burst": 0, 00:23:37.546 "low_priority_weight": 0, 00:23:37.546 "medium_priority_weight": 0, 00:23:37.546 "high_priority_weight": 0, 00:23:37.546 "nvme_adminq_poll_period_us": 10000, 00:23:37.546 "nvme_ioq_poll_period_us": 0, 00:23:37.546 "io_queue_requests": 0, 00:23:37.546 "delay_cmd_submit": true, 00:23:37.546 "transport_retry_count": 4, 00:23:37.546 "bdev_retry_count": 3, 00:23:37.546 "transport_ack_timeout": 0, 00:23:37.546 "ctrlr_loss_timeout_sec": 0, 00:23:37.546 "reconnect_delay_sec": 0, 00:23:37.546 "fast_io_fail_timeout_sec": 0, 00:23:37.546 "disable_auto_failback": false, 00:23:37.546 "generate_uuids": false, 00:23:37.546 "transport_tos": 0, 00:23:37.546 "nvme_error_stat": false, 00:23:37.546 "rdma_srq_size": 0, 00:23:37.546 "io_path_stat": false, 00:23:37.546 "allow_accel_sequence": false, 00:23:37.546 "rdma_max_cq_size": 0, 00:23:37.546 "rdma_cm_event_timeout_ms": 0, 00:23:37.546 "dhchap_digests": [ 00:23:37.546 "sha256", 00:23:37.546 "sha384", 00:23:37.546 "sha512" 00:23:37.546 ], 00:23:37.546 "dhchap_dhgroups": [ 00:23:37.546 "null", 00:23:37.546 
"ffdhe2048", 00:23:37.546 "ffdhe3072", 00:23:37.546 "ffdhe4096", 00:23:37.546 "ffdhe6144", 00:23:37.546 "ffdhe8192" 00:23:37.546 ], 00:23:37.546 "rdma_umr_per_io": false 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "bdev_nvme_set_hotplug", 00:23:37.546 "params": { 00:23:37.546 "period_us": 100000, 00:23:37.546 "enable": false 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "bdev_malloc_create", 00:23:37.546 "params": { 00:23:37.546 "name": "malloc0", 00:23:37.546 "num_blocks": 8192, 00:23:37.546 "block_size": 4096, 00:23:37.546 "physical_block_size": 4096, 00:23:37.546 "uuid": "361cefc9-d3d3-4001-9178-853ab874e832", 00:23:37.546 "optimal_io_boundary": 0, 00:23:37.546 "md_size": 0, 00:23:37.546 "dif_type": 0, 00:23:37.546 "dif_is_head_of_md": false, 00:23:37.546 "dif_pi_format": 0 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "bdev_wait_for_examine" 00:23:37.546 } 00:23:37.546 ] 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "subsystem": "nbd", 00:23:37.546 "config": [] 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "subsystem": "scheduler", 00:23:37.546 "config": [ 00:23:37.546 { 00:23:37.546 "method": "framework_set_scheduler", 00:23:37.546 "params": { 00:23:37.546 "name": "static" 00:23:37.546 } 00:23:37.546 } 00:23:37.546 ] 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "subsystem": "nvmf", 00:23:37.546 "config": [ 00:23:37.546 { 00:23:37.546 "method": "nvmf_set_config", 00:23:37.546 "params": { 00:23:37.546 "discovery_filter": "match_any", 00:23:37.546 "admin_cmd_passthru": { 00:23:37.546 "identify_ctrlr": false 00:23:37.546 }, 00:23:37.546 "dhchap_digests": [ 00:23:37.546 "sha256", 00:23:37.546 "sha384", 00:23:37.546 "sha512" 00:23:37.546 ], 00:23:37.546 "dhchap_dhgroups": [ 00:23:37.546 "null", 00:23:37.546 "ffdhe2048", 00:23:37.546 "ffdhe3072", 00:23:37.546 "ffdhe4096", 00:23:37.546 "ffdhe6144", 00:23:37.546 "ffdhe8192" 00:23:37.546 ] 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 
"method": "nvmf_set_max_subsystems", 00:23:37.546 "params": { 00:23:37.546 "max_subsystems": 1024 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "nvmf_set_crdt", 00:23:37.546 "params": { 00:23:37.546 "crdt1": 0, 00:23:37.546 "crdt2": 0, 00:23:37.546 "crdt3": 0 00:23:37.546 } 00:23:37.546 }, 00:23:37.546 { 00:23:37.546 "method": "nvmf_create_transport", 00:23:37.546 "params": { 00:23:37.546 "trtype": "TCP", 00:23:37.546 "max_queue_depth": 128, 00:23:37.547 "max_io_qpairs_per_ctrlr": 127, 00:23:37.547 "in_capsule_data_size": 4096, 00:23:37.547 "max_io_size": 131072, 00:23:37.547 "io_unit_size": 131072, 00:23:37.547 "max_aq_depth": 128, 00:23:37.547 "num_shared_buffers": 511, 00:23:37.547 "buf_cache_size": 4294967295, 00:23:37.547 "dif_insert_or_strip": false, 00:23:37.547 "zcopy": false, 00:23:37.547 "c2h_success": false, 00:23:37.547 "sock_priority": 0, 00:23:37.547 "abort_timeout_sec": 1, 00:23:37.547 "ack_timeout": 0, 00:23:37.547 "data_wr_pool_size": 0 00:23:37.547 } 00:23:37.547 }, 00:23:37.547 { 00:23:37.547 "method": "nvmf_create_subsystem", 00:23:37.547 "params": { 00:23:37.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.547 "allow_any_host": false, 00:23:37.547 "serial_number": "SPDK00000000000001", 00:23:37.547 "model_number": "SPDK bdev Controller", 00:23:37.547 "max_namespaces": 10, 00:23:37.547 "min_cntlid": 1, 00:23:37.547 "max_cntlid": 65519, 00:23:37.547 "ana_reporting": false 00:23:37.547 } 00:23:37.547 }, 00:23:37.547 { 00:23:37.547 "method": "nvmf_subsystem_add_host", 00:23:37.547 "params": { 00:23:37.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.547 "host": "nqn.2016-06.io.spdk:host1", 00:23:37.547 "psk": "key0" 00:23:37.547 } 00:23:37.547 }, 00:23:37.547 { 00:23:37.547 "method": "nvmf_subsystem_add_ns", 00:23:37.547 "params": { 00:23:37.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.547 "namespace": { 00:23:37.547 "nsid": 1, 00:23:37.547 "bdev_name": "malloc0", 00:23:37.547 "nguid": 
"361CEFC9D3D340019178853AB874E832", 00:23:37.547 "uuid": "361cefc9-d3d3-4001-9178-853ab874e832", 00:23:37.547 "no_auto_visible": false 00:23:37.547 } 00:23:37.547 } 00:23:37.547 }, 00:23:37.547 { 00:23:37.547 "method": "nvmf_subsystem_add_listener", 00:23:37.547 "params": { 00:23:37.547 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:37.547 "listen_address": { 00:23:37.547 "trtype": "TCP", 00:23:37.547 "adrfam": "IPv4", 00:23:37.547 "traddr": "10.0.0.2", 00:23:37.547 "trsvcid": "4420" 00:23:37.547 }, 00:23:37.547 "secure_channel": true 00:23:37.547 } 00:23:37.547 } 00:23:37.547 ] 00:23:37.547 } 00:23:37.547 ] 00:23:37.547 }' 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=354958 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 354958 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 354958 ']' 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:37.547 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:37.547 05:23:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:37.806 [2024-12-15 05:23:51.270892] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:37.806 [2024-12-15 05:23:51.270935] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:37.806 [2024-12-15 05:23:51.346938] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:37.806 [2024-12-15 05:23:51.367680] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:37.806 [2024-12-15 05:23:51.367714] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:37.806 [2024-12-15 05:23:51.367721] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:37.806 [2024-12-15 05:23:51.367727] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:37.806 [2024-12-15 05:23:51.367732] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:37.806 [2024-12-15 05:23:51.368245] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:38.065 [2024-12-15 05:23:51.576794] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:38.065 [2024-12-15 05:23:51.608814] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:38.065 [2024-12-15 05:23:51.609012] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@209 -- # bdevperf_pid=355195 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # waitforlisten 355195 /var/tmp/bdevperf.sock 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 355195 ']' 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:38.633 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@206 -- # echo '{ 00:23:38.633 "subsystems": [ 00:23:38.633 { 00:23:38.633 "subsystem": "keyring", 00:23:38.633 "config": [ 00:23:38.633 { 00:23:38.633 "method": "keyring_file_add_key", 00:23:38.633 "params": { 00:23:38.633 "name": "key0", 00:23:38.633 "path": "/tmp/tmp.eXxkOEzbcm" 00:23:38.633 } 00:23:38.633 } 00:23:38.633 ] 00:23:38.633 }, 00:23:38.633 { 00:23:38.633 "subsystem": "iobuf", 00:23:38.633 "config": [ 00:23:38.633 { 00:23:38.633 "method": "iobuf_set_options", 00:23:38.633 "params": { 00:23:38.633 "small_pool_count": 8192, 00:23:38.633 "large_pool_count": 1024, 00:23:38.633 "small_bufsize": 8192, 00:23:38.633 "large_bufsize": 135168, 00:23:38.633 "enable_numa": false 00:23:38.633 } 00:23:38.633 } 00:23:38.633 ] 00:23:38.633 }, 00:23:38.634 { 00:23:38.634 "subsystem": "sock", 00:23:38.634 "config": [ 00:23:38.634 { 00:23:38.634 "method": "sock_set_default_impl", 00:23:38.634 "params": { 00:23:38.634 "impl_name": "posix" 00:23:38.634 } 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "method": "sock_impl_set_options", 00:23:38.634 "params": { 00:23:38.634 "impl_name": "ssl", 00:23:38.634 "recv_buf_size": 4096, 00:23:38.634 "send_buf_size": 4096, 00:23:38.634 "enable_recv_pipe": true, 00:23:38.634 "enable_quickack": false, 00:23:38.634 "enable_placement_id": 0, 00:23:38.634 "enable_zerocopy_send_server": true, 00:23:38.634 "enable_zerocopy_send_client": false, 00:23:38.634 "zerocopy_threshold": 0, 00:23:38.634 "tls_version": 0, 00:23:38.634 "enable_ktls": false 00:23:38.634 } 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "method": "sock_impl_set_options", 00:23:38.634 "params": { 00:23:38.634 "impl_name": "posix", 00:23:38.634 "recv_buf_size": 2097152, 00:23:38.634 "send_buf_size": 
2097152, 00:23:38.634 "enable_recv_pipe": true, 00:23:38.634 "enable_quickack": false, 00:23:38.634 "enable_placement_id": 0, 00:23:38.634 "enable_zerocopy_send_server": true, 00:23:38.634 "enable_zerocopy_send_client": false, 00:23:38.634 "zerocopy_threshold": 0, 00:23:38.634 "tls_version": 0, 00:23:38.634 "enable_ktls": false 00:23:38.634 } 00:23:38.634 } 00:23:38.634 ] 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "subsystem": "vmd", 00:23:38.634 "config": [] 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "subsystem": "accel", 00:23:38.634 "config": [ 00:23:38.634 { 00:23:38.634 "method": "accel_set_options", 00:23:38.634 "params": { 00:23:38.634 "small_cache_size": 128, 00:23:38.634 "large_cache_size": 16, 00:23:38.634 "task_count": 2048, 00:23:38.634 "sequence_count": 2048, 00:23:38.634 "buf_count": 2048 00:23:38.634 } 00:23:38.634 } 00:23:38.634 ] 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "subsystem": "bdev", 00:23:38.634 "config": [ 00:23:38.634 { 00:23:38.634 "method": "bdev_set_options", 00:23:38.634 "params": { 00:23:38.634 "bdev_io_pool_size": 65535, 00:23:38.634 "bdev_io_cache_size": 256, 00:23:38.634 "bdev_auto_examine": true, 00:23:38.634 "iobuf_small_cache_size": 128, 00:23:38.634 "iobuf_large_cache_size": 16 00:23:38.634 } 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "method": "bdev_raid_set_options", 00:23:38.634 "params": { 00:23:38.634 "process_window_size_kb": 1024, 00:23:38.634 "process_max_bandwidth_mb_sec": 0 00:23:38.634 } 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "method": "bdev_iscsi_set_options", 00:23:38.634 "params": { 00:23:38.634 "timeout_sec": 30 00:23:38.634 } 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "method": "bdev_nvme_set_options", 00:23:38.634 "params": { 00:23:38.634 "action_on_timeout": "none", 00:23:38.634 "timeout_us": 0, 00:23:38.634 "timeout_admin_us": 0, 00:23:38.634 "keep_alive_timeout_ms": 10000, 00:23:38.634 "arbitration_burst": 0, 00:23:38.634 "low_priority_weight": 0, 00:23:38.634 "medium_priority_weight": 0, 
00:23:38.634 "high_priority_weight": 0, 00:23:38.634 "nvme_adminq_poll_period_us": 10000, 00:23:38.634 "nvme_ioq_poll_period_us": 0, 00:23:38.634 "io_queue_requests": 512, 00:23:38.634 "delay_cmd_submit": true, 00:23:38.634 "transport_retry_count": 4, 00:23:38.634 "bdev_retry_count": 3, 00:23:38.634 "transport_ack_timeout": 0, 00:23:38.634 "ctrlr_loss_timeout_sec": 0, 00:23:38.634 "reconnect_delay_sec": 0, 00:23:38.634 "fast_io_fail_timeout_sec": 0, 00:23:38.634 "disable_auto_failback": false, 00:23:38.634 "generate_uuids": false, 00:23:38.634 "transport_tos": 0, 00:23:38.634 "nvme_error_stat": false, 00:23:38.634 "rdma_srq_size": 0, 00:23:38.634 "io_path_stat": false, 00:23:38.634 "allow_accel_sequence": false, 00:23:38.634 "rdma_max_cq_size": 0, 00:23:38.634 "rdma_cm_event_timeout_ms": 0Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:38.634 , 00:23:38.634 "dhchap_digests": [ 00:23:38.634 "sha256", 00:23:38.634 "sha384", 00:23:38.634 "sha512" 00:23:38.634 ], 00:23:38.634 "dhchap_dhgroups": [ 00:23:38.634 "null", 00:23:38.634 "ffdhe2048", 00:23:38.634 "ffdhe3072", 00:23:38.634 "ffdhe4096", 00:23:38.634 "ffdhe6144", 00:23:38.634 "ffdhe8192" 00:23:38.634 ], 00:23:38.634 "rdma_umr_per_io": false 00:23:38.634 } 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "method": "bdev_nvme_attach_controller", 00:23:38.634 "params": { 00:23:38.634 "name": "TLSTEST", 00:23:38.634 "trtype": "TCP", 00:23:38.634 "adrfam": "IPv4", 00:23:38.634 "traddr": "10.0.0.2", 00:23:38.634 "trsvcid": "4420", 00:23:38.634 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:38.634 "prchk_reftag": false, 00:23:38.634 "prchk_guard": false, 00:23:38.634 "ctrlr_loss_timeout_sec": 0, 00:23:38.634 "reconnect_delay_sec": 0, 00:23:38.634 "fast_io_fail_timeout_sec": 0, 00:23:38.634 "psk": "key0", 00:23:38.634 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:38.634 "hdgst": false, 00:23:38.634 "ddgst": false, 00:23:38.634 "multipath": "multipath" 00:23:38.634 } 00:23:38.634 
}, 00:23:38.634 { 00:23:38.634 "method": "bdev_nvme_set_hotplug", 00:23:38.634 "params": { 00:23:38.634 "period_us": 100000, 00:23:38.634 "enable": false 00:23:38.634 } 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "method": "bdev_wait_for_examine" 00:23:38.634 } 00:23:38.634 ] 00:23:38.634 }, 00:23:38.634 { 00:23:38.634 "subsystem": "nbd", 00:23:38.634 "config": [] 00:23:38.634 } 00:23:38.634 ] 00:23:38.634 }' 00:23:38.634 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:38.634 05:23:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:38.634 [2024-12-15 05:23:52.174853] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:38.634 [2024-12-15 05:23:52.174900] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355195 ] 00:23:38.634 [2024-12-15 05:23:52.250182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:38.634 [2024-12-15 05:23:52.272065] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:23:38.893 [2024-12-15 05:23:52.419855] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:39.461 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:39.461 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:39.461 05:23:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:23:39.461 Running I/O for 10 seconds... 
00:23:41.771 5167.00 IOPS, 20.18 MiB/s [2024-12-15T04:23:56.393Z] 5282.50 IOPS, 20.63 MiB/s [2024-12-15T04:23:57.327Z] 5365.67 IOPS, 20.96 MiB/s [2024-12-15T04:23:58.261Z] 5266.75 IOPS, 20.57 MiB/s [2024-12-15T04:23:59.196Z] 5320.80 IOPS, 20.78 MiB/s [2024-12-15T04:24:00.571Z] 5343.33 IOPS, 20.87 MiB/s [2024-12-15T04:24:01.507Z] 5302.71 IOPS, 20.71 MiB/s [2024-12-15T04:24:02.439Z] 5170.25 IOPS, 20.20 MiB/s [2024-12-15T04:24:03.375Z] 5220.00 IOPS, 20.39 MiB/s [2024-12-15T04:24:03.375Z] 5251.70 IOPS, 20.51 MiB/s 00:23:49.688 Latency(us) 00:23:49.688 [2024-12-15T04:24:03.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.688 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:49.688 Verification LBA range: start 0x0 length 0x2000 00:23:49.688 TLSTESTn1 : 10.03 5251.01 20.51 0.00 0.00 24331.26 4618.73 33704.23 00:23:49.688 [2024-12-15T04:24:03.375Z] =================================================================================================================== 00:23:49.688 [2024-12-15T04:24:03.375Z] Total : 5251.01 20.51 0.00 0.00 24331.26 4618.73 33704.23 00:23:49.688 { 00:23:49.688 "results": [ 00:23:49.688 { 00:23:49.688 "job": "TLSTESTn1", 00:23:49.688 "core_mask": "0x4", 00:23:49.688 "workload": "verify", 00:23:49.688 "status": "finished", 00:23:49.688 "verify_range": { 00:23:49.688 "start": 0, 00:23:49.688 "length": 8192 00:23:49.688 }, 00:23:49.688 "queue_depth": 128, 00:23:49.688 "io_size": 4096, 00:23:49.688 "runtime": 10.025497, 00:23:49.688 "iops": 5251.011495988678, 00:23:49.688 "mibps": 20.511763656205773, 00:23:49.688 "io_failed": 0, 00:23:49.688 "io_timeout": 0, 00:23:49.688 "avg_latency_us": 24331.2563639324, 00:23:49.688 "min_latency_us": 4618.727619047619, 00:23:49.688 "max_latency_us": 33704.22857142857 00:23:49.688 } 00:23:49.688 ], 00:23:49.688 "core_count": 1 00:23:49.688 } 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # killprocess 355195 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 355195 ']' 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 355195 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 355195 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 355195' 00:23:49.688 killing process with pid 355195 00:23:49.688 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 355195 00:23:49.688 Received shutdown signal, test time was about 10.000000 seconds 00:23:49.688 00:23:49.688 Latency(us) 00:23:49.688 [2024-12-15T04:24:03.375Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:49.688 [2024-12-15T04:24:03.376Z] =================================================================================================================== 00:23:49.689 [2024-12-15T04:24:03.376Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:49.689 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 355195 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # killprocess 354958 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # 
'[' -z 354958 ']' 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 354958 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 354958 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 354958' 00:23:49.948 killing process with pid 354958 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 354958 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 354958 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # nvmfappstart 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=357115 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 357115 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@835 -- # '[' -z 357115 ']' 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:49.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:49.948 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.207 [2024-12-15 05:24:03.669052] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:50.207 [2024-12-15 05:24:03.669097] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:50.207 [2024-12-15 05:24:03.742063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:50.207 [2024-12-15 05:24:03.762458] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:50.207 [2024-12-15 05:24:03.762490] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:50.207 [2024-12-15 05:24:03.762499] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:50.207 [2024-12-15 05:24:03.762505] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:50.207 [2024-12-15 05:24:03.762510] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:50.207 [2024-12-15 05:24:03.763007] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:50.207 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:50.207 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:50.207 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:50.207 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:50.207 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:50.466 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:50.466 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # setup_nvmf_tgt /tmp/tmp.eXxkOEzbcm 00:23:50.466 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.eXxkOEzbcm 00:23:50.466 05:24:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:23:50.466 [2024-12-15 05:24:04.069846] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:50.466 05:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:23:50.725 05:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:23:50.984 [2024-12-15 05:24:04.466839] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:50.984 [2024-12-15 05:24:04.467032] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:23:50.984 05:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:23:50.984 malloc0 00:23:51.243 05:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:23:51.243 05:24:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:51.502 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # bdevperf_pid=357361 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@226 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # waitforlisten 357361 /var/tmp/bdevperf.sock 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 357361 ']' 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/bdevperf.sock...' 00:23:51.760 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:51.760 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:51.760 [2024-12-15 05:24:05.304439] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:51.760 [2024-12-15 05:24:05.304490] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357361 ] 00:23:51.760 [2024-12-15 05:24:05.379472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:51.760 [2024-12-15 05:24:05.401415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:52.019 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:52.019 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:52.019 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:52.019 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@230 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:52.278 [2024-12-15 05:24:05.868769] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:52.278 nvme0n1 00:23:52.536 05:24:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:52.536 Running I/O for 1 seconds... 00:23:53.475 4116.00 IOPS, 16.08 MiB/s 00:23:53.475 Latency(us) 00:23:53.475 [2024-12-15T04:24:07.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.475 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:53.475 Verification LBA range: start 0x0 length 0x2000 00:23:53.475 nvme0n1 : 1.02 4176.31 16.31 0.00 0.00 30423.49 4681.14 39196.77 00:23:53.475 [2024-12-15T04:24:07.162Z] =================================================================================================================== 00:23:53.475 [2024-12-15T04:24:07.162Z] Total : 4176.31 16.31 0.00 0.00 30423.49 4681.14 39196.77 00:23:53.475 { 00:23:53.475 "results": [ 00:23:53.475 { 00:23:53.475 "job": "nvme0n1", 00:23:53.475 "core_mask": "0x2", 00:23:53.475 "workload": "verify", 00:23:53.475 "status": "finished", 00:23:53.475 "verify_range": { 00:23:53.475 "start": 0, 00:23:53.475 "length": 8192 00:23:53.475 }, 00:23:53.475 "queue_depth": 128, 00:23:53.475 "io_size": 4096, 00:23:53.475 "runtime": 1.016209, 00:23:53.475 "iops": 4176.306251961949, 00:23:53.475 "mibps": 16.313696296726363, 00:23:53.475 "io_failed": 0, 00:23:53.475 "io_timeout": 0, 00:23:53.475 "avg_latency_us": 30423.49452897087, 00:23:53.475 "min_latency_us": 4681.142857142857, 00:23:53.475 "max_latency_us": 39196.769523809526 00:23:53.475 } 00:23:53.475 ], 00:23:53.475 "core_count": 1 00:23:53.475 } 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@236 -- # killprocess 357361 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357361 ']' 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357361 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # 
uname 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 357361 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357361' 00:23:53.475 killing process with pid 357361 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357361 00:23:53.475 Received shutdown signal, test time was about 1.000000 seconds 00:23:53.475 00:23:53.475 Latency(us) 00:23:53.475 [2024-12-15T04:24:07.162Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:53.475 [2024-12-15T04:24:07.162Z] =================================================================================================================== 00:23:53.475 [2024-12-15T04:24:07.162Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:53.475 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357361 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # killprocess 357115 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 357115 ']' 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 357115 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 357115 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 357115' 00:23:53.734 killing process with pid 357115 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 357115 00:23:53.734 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 357115 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@242 -- # nvmfappstart 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=358121 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 358121 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358121 ']' 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:53.993 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:53.993 [2024-12-15 05:24:07.580795] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:53.993 [2024-12-15 05:24:07.580846] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.993 [2024-12-15 05:24:07.657620] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.993 [2024-12-15 05:24:07.678341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.993 [2024-12-15 05:24:07.678375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.993 [2024-12-15 05:24:07.678382] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.993 [2024-12-15 05:24:07.678388] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.993 [2024-12-15 05:24:07.678393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:53.993 [2024-12-15 05:24:07.678892] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@243 -- # rpc_cmd 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.252 [2024-12-15 05:24:07.817199] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:54.252 malloc0 00:23:54.252 [2024-12-15 05:24:07.845218] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:54.252 [2024-12-15 05:24:07.845416] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # bdevperf_pid=358216 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@258 -- # waitforlisten 358216 /var/tmp/bdevperf.sock 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358216 ']' 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:54.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:54.252 05:24:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:54.252 [2024-12-15 05:24:07.921636] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:23:54.253 [2024-12-15 05:24:07.921679] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358216 ] 00:23:54.511 [2024-12-15 05:24:07.995605] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:54.511 [2024-12-15 05:24:08.017847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:54.511 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:54.511 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:54.511 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.eXxkOEzbcm 00:23:54.770 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@260 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:23:54.770 [2024-12-15 05:24:08.444650] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:55.029 nvme0n1 00:23:55.029 05:24:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@264 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:55.029 Running I/O for 1 seconds... 
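The per-job JSON that bdevperf emits in the results below reports both an `iops` and a `mibps` figure; the latter is simply iops × io_size / 2^20. A minimal sanity check of that derivation, using the IOPS value and the 4096-byte I/O size from this run (values copied from the log, not measured here):

```shell
# Recompute bdevperf's MiB/s figure from its own JSON fields:
# mibps = iops * io_size / (1024 * 1024)
awk 'BEGIN {
    iops = 4962.341571063352   # "iops" from the results JSON below
    io_size = 4096             # "io_size" (-o 4k) from the bdevperf invocation
    printf "%.6f\n", iops * io_size / (1024 * 1024)
}'
```

This reproduces the `"mibps": 19.384146...` value reported in the results JSON.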
00:23:55.965 4908.00 IOPS, 19.17 MiB/s 00:23:55.965 Latency(us) 00:23:55.965 [2024-12-15T04:24:09.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:55.965 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:23:55.965 Verification LBA range: start 0x0 length 0x2000 00:23:55.965 nvme0n1 : 1.02 4962.34 19.38 0.00 0.00 25624.35 5648.58 26588.89 00:23:55.965 [2024-12-15T04:24:09.652Z] =================================================================================================================== 00:23:55.965 [2024-12-15T04:24:09.652Z] Total : 4962.34 19.38 0.00 0.00 25624.35 5648.58 26588.89 00:23:56.224 { 00:23:56.224 "results": [ 00:23:56.224 { 00:23:56.224 "job": "nvme0n1", 00:23:56.224 "core_mask": "0x2", 00:23:56.224 "workload": "verify", 00:23:56.224 "status": "finished", 00:23:56.224 "verify_range": { 00:23:56.224 "start": 0, 00:23:56.224 "length": 8192 00:23:56.224 }, 00:23:56.224 "queue_depth": 128, 00:23:56.224 "io_size": 4096, 00:23:56.224 "runtime": 1.015045, 00:23:56.224 "iops": 4962.341571063352, 00:23:56.224 "mibps": 19.384146761966218, 00:23:56.224 "io_failed": 0, 00:23:56.224 "io_timeout": 0, 00:23:56.224 "avg_latency_us": 25624.354814373633, 00:23:56.224 "min_latency_us": 5648.579047619048, 00:23:56.224 "max_latency_us": 26588.891428571427 00:23:56.224 } 00:23:56.224 ], 00:23:56.224 "core_count": 1 00:23:56.224 } 00:23:56.224 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # rpc_cmd save_config 00:23:56.224 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.224 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:56.224 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.224 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@267 -- # tgtcfg='{ 00:23:56.224 "subsystems": [ 00:23:56.224 { 00:23:56.224 "subsystem": 
"keyring", 00:23:56.224 "config": [ 00:23:56.224 { 00:23:56.224 "method": "keyring_file_add_key", 00:23:56.224 "params": { 00:23:56.224 "name": "key0", 00:23:56.224 "path": "/tmp/tmp.eXxkOEzbcm" 00:23:56.224 } 00:23:56.224 } 00:23:56.224 ] 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "subsystem": "iobuf", 00:23:56.224 "config": [ 00:23:56.224 { 00:23:56.224 "method": "iobuf_set_options", 00:23:56.224 "params": { 00:23:56.224 "small_pool_count": 8192, 00:23:56.224 "large_pool_count": 1024, 00:23:56.224 "small_bufsize": 8192, 00:23:56.224 "large_bufsize": 135168, 00:23:56.224 "enable_numa": false 00:23:56.224 } 00:23:56.224 } 00:23:56.224 ] 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "subsystem": "sock", 00:23:56.224 "config": [ 00:23:56.224 { 00:23:56.224 "method": "sock_set_default_impl", 00:23:56.224 "params": { 00:23:56.224 "impl_name": "posix" 00:23:56.224 } 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "method": "sock_impl_set_options", 00:23:56.224 "params": { 00:23:56.224 "impl_name": "ssl", 00:23:56.224 "recv_buf_size": 4096, 00:23:56.224 "send_buf_size": 4096, 00:23:56.224 "enable_recv_pipe": true, 00:23:56.224 "enable_quickack": false, 00:23:56.224 "enable_placement_id": 0, 00:23:56.224 "enable_zerocopy_send_server": true, 00:23:56.224 "enable_zerocopy_send_client": false, 00:23:56.224 "zerocopy_threshold": 0, 00:23:56.224 "tls_version": 0, 00:23:56.224 "enable_ktls": false 00:23:56.224 } 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "method": "sock_impl_set_options", 00:23:56.224 "params": { 00:23:56.224 "impl_name": "posix", 00:23:56.224 "recv_buf_size": 2097152, 00:23:56.224 "send_buf_size": 2097152, 00:23:56.224 "enable_recv_pipe": true, 00:23:56.224 "enable_quickack": false, 00:23:56.224 "enable_placement_id": 0, 00:23:56.224 "enable_zerocopy_send_server": true, 00:23:56.224 "enable_zerocopy_send_client": false, 00:23:56.224 "zerocopy_threshold": 0, 00:23:56.224 "tls_version": 0, 00:23:56.224 "enable_ktls": false 00:23:56.224 } 00:23:56.224 } 00:23:56.224 
] 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "subsystem": "vmd", 00:23:56.224 "config": [] 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "subsystem": "accel", 00:23:56.224 "config": [ 00:23:56.224 { 00:23:56.224 "method": "accel_set_options", 00:23:56.224 "params": { 00:23:56.224 "small_cache_size": 128, 00:23:56.224 "large_cache_size": 16, 00:23:56.224 "task_count": 2048, 00:23:56.224 "sequence_count": 2048, 00:23:56.224 "buf_count": 2048 00:23:56.224 } 00:23:56.224 } 00:23:56.224 ] 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "subsystem": "bdev", 00:23:56.224 "config": [ 00:23:56.224 { 00:23:56.224 "method": "bdev_set_options", 00:23:56.224 "params": { 00:23:56.224 "bdev_io_pool_size": 65535, 00:23:56.224 "bdev_io_cache_size": 256, 00:23:56.224 "bdev_auto_examine": true, 00:23:56.224 "iobuf_small_cache_size": 128, 00:23:56.224 "iobuf_large_cache_size": 16 00:23:56.224 } 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "method": "bdev_raid_set_options", 00:23:56.224 "params": { 00:23:56.224 "process_window_size_kb": 1024, 00:23:56.224 "process_max_bandwidth_mb_sec": 0 00:23:56.224 } 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "method": "bdev_iscsi_set_options", 00:23:56.224 "params": { 00:23:56.224 "timeout_sec": 30 00:23:56.224 } 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "method": "bdev_nvme_set_options", 00:23:56.224 "params": { 00:23:56.224 "action_on_timeout": "none", 00:23:56.224 "timeout_us": 0, 00:23:56.224 "timeout_admin_us": 0, 00:23:56.224 "keep_alive_timeout_ms": 10000, 00:23:56.224 "arbitration_burst": 0, 00:23:56.224 "low_priority_weight": 0, 00:23:56.224 "medium_priority_weight": 0, 00:23:56.224 "high_priority_weight": 0, 00:23:56.224 "nvme_adminq_poll_period_us": 10000, 00:23:56.224 "nvme_ioq_poll_period_us": 0, 00:23:56.224 "io_queue_requests": 0, 00:23:56.224 "delay_cmd_submit": true, 00:23:56.224 "transport_retry_count": 4, 00:23:56.224 "bdev_retry_count": 3, 00:23:56.224 "transport_ack_timeout": 0, 00:23:56.224 "ctrlr_loss_timeout_sec": 0, 
00:23:56.224 "reconnect_delay_sec": 0, 00:23:56.224 "fast_io_fail_timeout_sec": 0, 00:23:56.224 "disable_auto_failback": false, 00:23:56.224 "generate_uuids": false, 00:23:56.224 "transport_tos": 0, 00:23:56.224 "nvme_error_stat": false, 00:23:56.224 "rdma_srq_size": 0, 00:23:56.224 "io_path_stat": false, 00:23:56.224 "allow_accel_sequence": false, 00:23:56.224 "rdma_max_cq_size": 0, 00:23:56.224 "rdma_cm_event_timeout_ms": 0, 00:23:56.224 "dhchap_digests": [ 00:23:56.224 "sha256", 00:23:56.224 "sha384", 00:23:56.224 "sha512" 00:23:56.224 ], 00:23:56.224 "dhchap_dhgroups": [ 00:23:56.224 "null", 00:23:56.224 "ffdhe2048", 00:23:56.224 "ffdhe3072", 00:23:56.224 "ffdhe4096", 00:23:56.224 "ffdhe6144", 00:23:56.224 "ffdhe8192" 00:23:56.224 ], 00:23:56.224 "rdma_umr_per_io": false 00:23:56.224 } 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "method": "bdev_nvme_set_hotplug", 00:23:56.224 "params": { 00:23:56.224 "period_us": 100000, 00:23:56.224 "enable": false 00:23:56.224 } 00:23:56.224 }, 00:23:56.224 { 00:23:56.224 "method": "bdev_malloc_create", 00:23:56.224 "params": { 00:23:56.224 "name": "malloc0", 00:23:56.224 "num_blocks": 8192, 00:23:56.224 "block_size": 4096, 00:23:56.224 "physical_block_size": 4096, 00:23:56.224 "uuid": "5537d4e2-ef0d-4621-9a79-da9514f94137", 00:23:56.224 "optimal_io_boundary": 0, 00:23:56.224 "md_size": 0, 00:23:56.224 "dif_type": 0, 00:23:56.224 "dif_is_head_of_md": false, 00:23:56.224 "dif_pi_format": 0 00:23:56.224 } 00:23:56.224 }, 00:23:56.225 { 00:23:56.225 "method": "bdev_wait_for_examine" 00:23:56.225 } 00:23:56.225 ] 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "subsystem": "nbd", 00:23:56.225 "config": [] 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "subsystem": "scheduler", 00:23:56.225 "config": [ 00:23:56.225 { 00:23:56.225 "method": "framework_set_scheduler", 00:23:56.225 "params": { 00:23:56.225 "name": "static" 00:23:56.225 } 00:23:56.225 } 00:23:56.225 ] 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "subsystem": "nvmf", 
00:23:56.225 "config": [ 00:23:56.225 { 00:23:56.225 "method": "nvmf_set_config", 00:23:56.225 "params": { 00:23:56.225 "discovery_filter": "match_any", 00:23:56.225 "admin_cmd_passthru": { 00:23:56.225 "identify_ctrlr": false 00:23:56.225 }, 00:23:56.225 "dhchap_digests": [ 00:23:56.225 "sha256", 00:23:56.225 "sha384", 00:23:56.225 "sha512" 00:23:56.225 ], 00:23:56.225 "dhchap_dhgroups": [ 00:23:56.225 "null", 00:23:56.225 "ffdhe2048", 00:23:56.225 "ffdhe3072", 00:23:56.225 "ffdhe4096", 00:23:56.225 "ffdhe6144", 00:23:56.225 "ffdhe8192" 00:23:56.225 ] 00:23:56.225 } 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "method": "nvmf_set_max_subsystems", 00:23:56.225 "params": { 00:23:56.225 "max_subsystems": 1024 00:23:56.225 } 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "method": "nvmf_set_crdt", 00:23:56.225 "params": { 00:23:56.225 "crdt1": 0, 00:23:56.225 "crdt2": 0, 00:23:56.225 "crdt3": 0 00:23:56.225 } 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "method": "nvmf_create_transport", 00:23:56.225 "params": { 00:23:56.225 "trtype": "TCP", 00:23:56.225 "max_queue_depth": 128, 00:23:56.225 "max_io_qpairs_per_ctrlr": 127, 00:23:56.225 "in_capsule_data_size": 4096, 00:23:56.225 "max_io_size": 131072, 00:23:56.225 "io_unit_size": 131072, 00:23:56.225 "max_aq_depth": 128, 00:23:56.225 "num_shared_buffers": 511, 00:23:56.225 "buf_cache_size": 4294967295, 00:23:56.225 "dif_insert_or_strip": false, 00:23:56.225 "zcopy": false, 00:23:56.225 "c2h_success": false, 00:23:56.225 "sock_priority": 0, 00:23:56.225 "abort_timeout_sec": 1, 00:23:56.225 "ack_timeout": 0, 00:23:56.225 "data_wr_pool_size": 0 00:23:56.225 } 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "method": "nvmf_create_subsystem", 00:23:56.225 "params": { 00:23:56.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.225 "allow_any_host": false, 00:23:56.225 "serial_number": "00000000000000000000", 00:23:56.225 "model_number": "SPDK bdev Controller", 00:23:56.225 "max_namespaces": 32, 00:23:56.225 "min_cntlid": 1, 
00:23:56.225 "max_cntlid": 65519, 00:23:56.225 "ana_reporting": false 00:23:56.225 } 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "method": "nvmf_subsystem_add_host", 00:23:56.225 "params": { 00:23:56.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.225 "host": "nqn.2016-06.io.spdk:host1", 00:23:56.225 "psk": "key0" 00:23:56.225 } 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "method": "nvmf_subsystem_add_ns", 00:23:56.225 "params": { 00:23:56.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.225 "namespace": { 00:23:56.225 "nsid": 1, 00:23:56.225 "bdev_name": "malloc0", 00:23:56.225 "nguid": "5537D4E2EF0D46219A79DA9514F94137", 00:23:56.225 "uuid": "5537d4e2-ef0d-4621-9a79-da9514f94137", 00:23:56.225 "no_auto_visible": false 00:23:56.225 } 00:23:56.225 } 00:23:56.225 }, 00:23:56.225 { 00:23:56.225 "method": "nvmf_subsystem_add_listener", 00:23:56.225 "params": { 00:23:56.225 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.225 "listen_address": { 00:23:56.225 "trtype": "TCP", 00:23:56.225 "adrfam": "IPv4", 00:23:56.225 "traddr": "10.0.0.2", 00:23:56.225 "trsvcid": "4420" 00:23:56.225 }, 00:23:56.225 "secure_channel": false, 00:23:56.225 "sock_impl": "ssl" 00:23:56.225 } 00:23:56.225 } 00:23:56.225 ] 00:23:56.225 } 00:23:56.225 ] 00:23:56.225 }' 00:23:56.225 05:24:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:23:56.484 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # bperfcfg='{ 00:23:56.484 "subsystems": [ 00:23:56.484 { 00:23:56.484 "subsystem": "keyring", 00:23:56.484 "config": [ 00:23:56.484 { 00:23:56.484 "method": "keyring_file_add_key", 00:23:56.484 "params": { 00:23:56.484 "name": "key0", 00:23:56.484 "path": "/tmp/tmp.eXxkOEzbcm" 00:23:56.484 } 00:23:56.484 } 00:23:56.484 ] 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "subsystem": "iobuf", 00:23:56.484 "config": [ 00:23:56.484 { 00:23:56.484 "method": 
"iobuf_set_options", 00:23:56.484 "params": { 00:23:56.484 "small_pool_count": 8192, 00:23:56.484 "large_pool_count": 1024, 00:23:56.484 "small_bufsize": 8192, 00:23:56.484 "large_bufsize": 135168, 00:23:56.484 "enable_numa": false 00:23:56.484 } 00:23:56.484 } 00:23:56.484 ] 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "subsystem": "sock", 00:23:56.484 "config": [ 00:23:56.484 { 00:23:56.484 "method": "sock_set_default_impl", 00:23:56.484 "params": { 00:23:56.484 "impl_name": "posix" 00:23:56.484 } 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "method": "sock_impl_set_options", 00:23:56.484 "params": { 00:23:56.484 "impl_name": "ssl", 00:23:56.484 "recv_buf_size": 4096, 00:23:56.484 "send_buf_size": 4096, 00:23:56.484 "enable_recv_pipe": true, 00:23:56.484 "enable_quickack": false, 00:23:56.484 "enable_placement_id": 0, 00:23:56.484 "enable_zerocopy_send_server": true, 00:23:56.484 "enable_zerocopy_send_client": false, 00:23:56.484 "zerocopy_threshold": 0, 00:23:56.484 "tls_version": 0, 00:23:56.484 "enable_ktls": false 00:23:56.484 } 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "method": "sock_impl_set_options", 00:23:56.484 "params": { 00:23:56.484 "impl_name": "posix", 00:23:56.484 "recv_buf_size": 2097152, 00:23:56.484 "send_buf_size": 2097152, 00:23:56.484 "enable_recv_pipe": true, 00:23:56.484 "enable_quickack": false, 00:23:56.484 "enable_placement_id": 0, 00:23:56.484 "enable_zerocopy_send_server": true, 00:23:56.484 "enable_zerocopy_send_client": false, 00:23:56.484 "zerocopy_threshold": 0, 00:23:56.484 "tls_version": 0, 00:23:56.484 "enable_ktls": false 00:23:56.484 } 00:23:56.484 } 00:23:56.484 ] 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "subsystem": "vmd", 00:23:56.484 "config": [] 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "subsystem": "accel", 00:23:56.484 "config": [ 00:23:56.484 { 00:23:56.484 "method": "accel_set_options", 00:23:56.484 "params": { 00:23:56.484 "small_cache_size": 128, 00:23:56.484 "large_cache_size": 16, 00:23:56.484 "task_count": 
2048, 00:23:56.484 "sequence_count": 2048, 00:23:56.484 "buf_count": 2048 00:23:56.484 } 00:23:56.484 } 00:23:56.484 ] 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "subsystem": "bdev", 00:23:56.484 "config": [ 00:23:56.484 { 00:23:56.484 "method": "bdev_set_options", 00:23:56.484 "params": { 00:23:56.484 "bdev_io_pool_size": 65535, 00:23:56.484 "bdev_io_cache_size": 256, 00:23:56.484 "bdev_auto_examine": true, 00:23:56.484 "iobuf_small_cache_size": 128, 00:23:56.484 "iobuf_large_cache_size": 16 00:23:56.484 } 00:23:56.484 }, 00:23:56.484 { 00:23:56.484 "method": "bdev_raid_set_options", 00:23:56.484 "params": { 00:23:56.484 "process_window_size_kb": 1024, 00:23:56.484 "process_max_bandwidth_mb_sec": 0 00:23:56.485 } 00:23:56.485 }, 00:23:56.485 { 00:23:56.485 "method": "bdev_iscsi_set_options", 00:23:56.485 "params": { 00:23:56.485 "timeout_sec": 30 00:23:56.485 } 00:23:56.485 }, 00:23:56.485 { 00:23:56.485 "method": "bdev_nvme_set_options", 00:23:56.485 "params": { 00:23:56.485 "action_on_timeout": "none", 00:23:56.485 "timeout_us": 0, 00:23:56.485 "timeout_admin_us": 0, 00:23:56.485 "keep_alive_timeout_ms": 10000, 00:23:56.485 "arbitration_burst": 0, 00:23:56.485 "low_priority_weight": 0, 00:23:56.485 "medium_priority_weight": 0, 00:23:56.485 "high_priority_weight": 0, 00:23:56.485 "nvme_adminq_poll_period_us": 10000, 00:23:56.485 "nvme_ioq_poll_period_us": 0, 00:23:56.485 "io_queue_requests": 512, 00:23:56.485 "delay_cmd_submit": true, 00:23:56.485 "transport_retry_count": 4, 00:23:56.485 "bdev_retry_count": 3, 00:23:56.485 "transport_ack_timeout": 0, 00:23:56.485 "ctrlr_loss_timeout_sec": 0, 00:23:56.485 "reconnect_delay_sec": 0, 00:23:56.485 "fast_io_fail_timeout_sec": 0, 00:23:56.485 "disable_auto_failback": false, 00:23:56.485 "generate_uuids": false, 00:23:56.485 "transport_tos": 0, 00:23:56.485 "nvme_error_stat": false, 00:23:56.485 "rdma_srq_size": 0, 00:23:56.485 "io_path_stat": false, 00:23:56.485 "allow_accel_sequence": false, 00:23:56.485 
"rdma_max_cq_size": 0, 00:23:56.485 "rdma_cm_event_timeout_ms": 0, 00:23:56.485 "dhchap_digests": [ 00:23:56.485 "sha256", 00:23:56.485 "sha384", 00:23:56.485 "sha512" 00:23:56.485 ], 00:23:56.485 "dhchap_dhgroups": [ 00:23:56.485 "null", 00:23:56.485 "ffdhe2048", 00:23:56.485 "ffdhe3072", 00:23:56.485 "ffdhe4096", 00:23:56.485 "ffdhe6144", 00:23:56.485 "ffdhe8192" 00:23:56.485 ], 00:23:56.485 "rdma_umr_per_io": false 00:23:56.485 } 00:23:56.485 }, 00:23:56.485 { 00:23:56.485 "method": "bdev_nvme_attach_controller", 00:23:56.485 "params": { 00:23:56.485 "name": "nvme0", 00:23:56.485 "trtype": "TCP", 00:23:56.485 "adrfam": "IPv4", 00:23:56.485 "traddr": "10.0.0.2", 00:23:56.485 "trsvcid": "4420", 00:23:56.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:56.485 "prchk_reftag": false, 00:23:56.485 "prchk_guard": false, 00:23:56.485 "ctrlr_loss_timeout_sec": 0, 00:23:56.485 "reconnect_delay_sec": 0, 00:23:56.485 "fast_io_fail_timeout_sec": 0, 00:23:56.485 "psk": "key0", 00:23:56.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:56.485 "hdgst": false, 00:23:56.485 "ddgst": false, 00:23:56.485 "multipath": "multipath" 00:23:56.485 } 00:23:56.485 }, 00:23:56.485 { 00:23:56.485 "method": "bdev_nvme_set_hotplug", 00:23:56.485 "params": { 00:23:56.485 "period_us": 100000, 00:23:56.485 "enable": false 00:23:56.485 } 00:23:56.485 }, 00:23:56.485 { 00:23:56.485 "method": "bdev_enable_histogram", 00:23:56.485 "params": { 00:23:56.485 "name": "nvme0n1", 00:23:56.485 "enable": true 00:23:56.485 } 00:23:56.485 }, 00:23:56.485 { 00:23:56.485 "method": "bdev_wait_for_examine" 00:23:56.485 } 00:23:56.485 ] 00:23:56.485 }, 00:23:56.485 { 00:23:56.485 "subsystem": "nbd", 00:23:56.485 "config": [] 00:23:56.485 } 00:23:56.485 ] 00:23:56.485 }' 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@270 -- # killprocess 358216 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358216 ']' 00:23:56.485 05:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358216 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358216 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358216' 00:23:56.485 killing process with pid 358216 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358216 00:23:56.485 Received shutdown signal, test time was about 1.000000 seconds 00:23:56.485 00:23:56.485 Latency(us) 00:23:56.485 [2024-12-15T04:24:10.172Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:56.485 [2024-12-15T04:24:10.172Z] =================================================================================================================== 00:23:56.485 [2024-12-15T04:24:10.172Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:56.485 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358216 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # killprocess 358121 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358121 ']' 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358121 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:23:56.744 05:24:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358121 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358121' 00:23:56.744 killing process with pid 358121 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358121 00:23:56.744 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358121 00:23:57.004 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # nvmfappstart -c /dev/fd/62 00:23:57.004 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:23:57.004 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@273 -- # echo '{ 00:23:57.004 "subsystems": [ 00:23:57.004 { 00:23:57.004 "subsystem": "keyring", 00:23:57.004 "config": [ 00:23:57.004 { 00:23:57.004 "method": "keyring_file_add_key", 00:23:57.004 "params": { 00:23:57.004 "name": "key0", 00:23:57.004 "path": "/tmp/tmp.eXxkOEzbcm" 00:23:57.004 } 00:23:57.004 } 00:23:57.004 ] 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "subsystem": "iobuf", 00:23:57.004 "config": [ 00:23:57.004 { 00:23:57.004 "method": "iobuf_set_options", 00:23:57.004 "params": { 00:23:57.004 "small_pool_count": 8192, 00:23:57.004 "large_pool_count": 1024, 00:23:57.004 "small_bufsize": 8192, 00:23:57.004 "large_bufsize": 135168, 00:23:57.004 "enable_numa": false 00:23:57.004 } 00:23:57.004 } 00:23:57.004 ] 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "subsystem": "sock", 00:23:57.004 "config": [ 00:23:57.004 { 
00:23:57.004 "method": "sock_set_default_impl", 00:23:57.004 "params": { 00:23:57.004 "impl_name": "posix" 00:23:57.004 } 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "method": "sock_impl_set_options", 00:23:57.004 "params": { 00:23:57.004 "impl_name": "ssl", 00:23:57.004 "recv_buf_size": 4096, 00:23:57.004 "send_buf_size": 4096, 00:23:57.004 "enable_recv_pipe": true, 00:23:57.004 "enable_quickack": false, 00:23:57.004 "enable_placement_id": 0, 00:23:57.004 "enable_zerocopy_send_server": true, 00:23:57.004 "enable_zerocopy_send_client": false, 00:23:57.004 "zerocopy_threshold": 0, 00:23:57.004 "tls_version": 0, 00:23:57.004 "enable_ktls": false 00:23:57.004 } 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "method": "sock_impl_set_options", 00:23:57.004 "params": { 00:23:57.004 "impl_name": "posix", 00:23:57.004 "recv_buf_size": 2097152, 00:23:57.004 "send_buf_size": 2097152, 00:23:57.004 "enable_recv_pipe": true, 00:23:57.004 "enable_quickack": false, 00:23:57.004 "enable_placement_id": 0, 00:23:57.004 "enable_zerocopy_send_server": true, 00:23:57.004 "enable_zerocopy_send_client": false, 00:23:57.004 "zerocopy_threshold": 0, 00:23:57.004 "tls_version": 0, 00:23:57.004 "enable_ktls": false 00:23:57.004 } 00:23:57.004 } 00:23:57.004 ] 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "subsystem": "vmd", 00:23:57.004 "config": [] 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "subsystem": "accel", 00:23:57.004 "config": [ 00:23:57.004 { 00:23:57.004 "method": "accel_set_options", 00:23:57.004 "params": { 00:23:57.004 "small_cache_size": 128, 00:23:57.004 "large_cache_size": 16, 00:23:57.004 "task_count": 2048, 00:23:57.004 "sequence_count": 2048, 00:23:57.004 "buf_count": 2048 00:23:57.004 } 00:23:57.004 } 00:23:57.004 ] 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "subsystem": "bdev", 00:23:57.004 "config": [ 00:23:57.004 { 00:23:57.004 "method": "bdev_set_options", 00:23:57.004 "params": { 00:23:57.004 "bdev_io_pool_size": 65535, 00:23:57.004 "bdev_io_cache_size": 256, 
00:23:57.004 "bdev_auto_examine": true, 00:23:57.004 "iobuf_small_cache_size": 128, 00:23:57.004 "iobuf_large_cache_size": 16 00:23:57.004 } 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "method": "bdev_raid_set_options", 00:23:57.004 "params": { 00:23:57.004 "process_window_size_kb": 1024, 00:23:57.004 "process_max_bandwidth_mb_sec": 0 00:23:57.004 } 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "method": "bdev_iscsi_set_options", 00:23:57.004 "params": { 00:23:57.004 "timeout_sec": 30 00:23:57.004 } 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "method": "bdev_nvme_set_options", 00:23:57.004 "params": { 00:23:57.004 "action_on_timeout": "none", 00:23:57.004 "timeout_us": 0, 00:23:57.004 "timeout_admin_us": 0, 00:23:57.004 "keep_alive_timeout_ms": 10000, 00:23:57.004 "arbitration_burst": 0, 00:23:57.004 "low_priority_weight": 0, 00:23:57.004 "medium_priority_weight": 0, 00:23:57.004 "high_priority_weight": 0, 00:23:57.004 "nvme_adminq_poll_period_us": 10000, 00:23:57.004 "nvme_ioq_poll_period_us": 0, 00:23:57.004 "io_queue_requests": 0, 00:23:57.004 "delay_cmd_submit": true, 00:23:57.004 "transport_retry_count": 4, 00:23:57.004 "bdev_retry_count": 3, 00:23:57.004 "transport_ack_timeout": 0, 00:23:57.004 "ctrlr_loss_timeout_sec": 0, 00:23:57.004 "reconnect_delay_sec": 0, 00:23:57.004 "fast_io_fail_timeout_sec": 0, 00:23:57.004 "disable_auto_failback": false, 00:23:57.004 "generate_uuids": false, 00:23:57.004 "transport_tos": 0, 00:23:57.004 "nvme_error_stat": false, 00:23:57.004 "rdma_srq_size": 0, 00:23:57.004 "io_path_stat": false, 00:23:57.004 "allow_accel_sequence": false, 00:23:57.004 "rdma_max_cq_size": 0, 00:23:57.004 "rdma_cm_event_timeout_ms": 0, 00:23:57.004 "dhchap_digests": [ 00:23:57.004 "sha256", 00:23:57.004 "sha384", 00:23:57.004 "sha512" 00:23:57.004 ], 00:23:57.004 "dhchap_dhgroups": [ 00:23:57.004 "null", 00:23:57.004 "ffdhe2048", 00:23:57.004 "ffdhe3072", 00:23:57.004 "ffdhe4096", 00:23:57.004 "ffdhe6144", 00:23:57.004 "ffdhe8192" 00:23:57.004 ], 
00:23:57.004 "rdma_umr_per_io": false 00:23:57.004 } 00:23:57.004 }, 00:23:57.004 { 00:23:57.004 "method": "bdev_nvme_set_hotplug", 00:23:57.004 "params": { 00:23:57.004 "period_us": 100000, 00:23:57.004 "enable": false 00:23:57.004 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "bdev_malloc_create", 00:23:57.005 "params": { 00:23:57.005 "name": "malloc0", 00:23:57.005 "num_blocks": 8192, 00:23:57.005 "block_size": 4096, 00:23:57.005 "physical_block_size": 4096, 00:23:57.005 "uuid": "5537d4e2-ef0d-4621-9a79-da9514f94137", 00:23:57.005 "optimal_io_boundary": 0, 00:23:57.005 "md_size": 0, 00:23:57.005 "dif_type": 0, 00:23:57.005 "dif_is_head_of_md": false, 00:23:57.005 "dif_pi_format": 0 00:23:57.005 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "bdev_wait_for_examine" 00:23:57.005 } 00:23:57.005 ] 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "subsystem": "nbd", 00:23:57.005 "config": [] 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "subsystem": "scheduler", 00:23:57.005 "config": [ 00:23:57.005 { 00:23:57.005 "method": "framework_set_scheduler", 00:23:57.005 "params": { 00:23:57.005 "name": "static" 00:23:57.005 } 00:23:57.005 } 00:23:57.005 ] 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "subsystem": "nvmf", 00:23:57.005 "config": [ 00:23:57.005 { 00:23:57.005 "method": "nvmf_set_config", 00:23:57.005 "params": { 00:23:57.005 "discovery_filter": "match_any", 00:23:57.005 "admin_cmd_passthru": { 00:23:57.005 "identify_ctrlr": false 00:23:57.005 }, 00:23:57.005 "dhchap_digests": [ 00:23:57.005 "sha256", 00:23:57.005 "sha384", 00:23:57.005 "sha512" 00:23:57.005 ], 00:23:57.005 "dhchap_dhgroups": [ 00:23:57.005 "null", 00:23:57.005 "ffdhe2048", 00:23:57.005 "ffdhe3072", 00:23:57.005 "ffdhe4096", 00:23:57.005 "ffdhe6144", 00:23:57.005 "ffdhe8192" 00:23:57.005 ] 00:23:57.005 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "nvmf_set_max_subsystems", 00:23:57.005 "params": { 00:23:57.005 "max_subsystems": 1024 00:23:57.005 } 00:23:57.005 }, 
00:23:57.005 { 00:23:57.005 "method": "nvmf_set_crdt", 00:23:57.005 "params": { 00:23:57.005 "crdt1": 0, 00:23:57.005 "crdt2": 0, 00:23:57.005 "crdt3": 0 00:23:57.005 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "nvmf_create_transport", 00:23:57.005 "params": { 00:23:57.005 "trtype": "TCP", 00:23:57.005 "max_queue_depth": 128, 00:23:57.005 "max_io_qpairs_per_ctrlr": 127, 00:23:57.005 "in_capsule_data_size": 4096, 00:23:57.005 "max_io_size": 131072, 00:23:57.005 "io_unit_size": 131072, 00:23:57.005 "max_aq_depth": 128, 00:23:57.005 "num_shared_buffers": 511, 00:23:57.005 "buf_cache_size": 4294967295, 00:23:57.005 "dif_insert_or_strip": false, 00:23:57.005 "zcopy": false, 00:23:57.005 "c2h_success": false, 00:23:57.005 "sock_priority": 0, 00:23:57.005 "abort_timeout_sec": 1, 00:23:57.005 "ack_timeout": 0, 00:23:57.005 "data_wr_pool_size": 0 00:23:57.005 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "nvmf_create_subsystem", 00:23:57.005 "params": { 00:23:57.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.005 "allow_any_host": false, 00:23:57.005 "serial_number": "00000000000000000000", 00:23:57.005 "model_number": "SPDK bdev Controller", 00:23:57.005 "max_namespaces": 32, 00:23:57.005 "min_cntlid": 1, 00:23:57.005 "max_cntlid": 65519, 00:23:57.005 "ana_reporting": false 00:23:57.005 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "nvmf_subsystem_add_host", 00:23:57.005 "params": { 00:23:57.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.005 "host": "nqn.2016-06.io.spdk:host1", 00:23:57.005 "psk": "key0" 00:23:57.005 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "nvmf_subsystem_add_ns", 00:23:57.005 "params": { 00:23:57.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.005 "namespace": { 00:23:57.005 "nsid": 1, 00:23:57.005 "bdev_name": "malloc0", 00:23:57.005 "nguid": "5537D4E2EF0D46219A79DA9514F94137", 00:23:57.005 "uuid": "5537d4e2-ef0d-4621-9a79-da9514f94137", 00:23:57.005 "no_auto_visible": false 00:23:57.005 } 
00:23:57.005 } 00:23:57.005 }, 00:23:57.005 { 00:23:57.005 "method": "nvmf_subsystem_add_listener", 00:23:57.005 "params": { 00:23:57.005 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.005 "listen_address": { 00:23:57.005 "trtype": "TCP", 00:23:57.005 "adrfam": "IPv4", 00:23:57.005 "traddr": "10.0.0.2", 00:23:57.005 "trsvcid": "4420" 00:23:57.005 }, 00:23:57.005 "secure_channel": false, 00:23:57.005 "sock_impl": "ssl" 00:23:57.005 } 00:23:57.005 } 00:23:57.005 ] 00:23:57.005 } 00:23:57.005 ] 00:23:57.005 }' 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@509 -- # nvmfpid=358695 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@510 -- # waitforlisten 358695 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358695 ']' 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:57.005 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.005 05:24:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.005 [2024-12-15 05:24:10.526413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:57.005 [2024-12-15 05:24:10.526460] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:57.005 [2024-12-15 05:24:10.602439] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.005 [2024-12-15 05:24:10.621713] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:57.005 [2024-12-15 05:24:10.621745] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:57.005 [2024-12-15 05:24:10.621752] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:57.005 [2024-12-15 05:24:10.621758] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:57.005 [2024-12-15 05:24:10.621764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:57.005 [2024-12-15 05:24:10.622291] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:57.264 [2024-12-15 05:24:10.831282] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:57.264 [2024-12-15 05:24:10.863305] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:23:57.264 [2024-12-15 05:24:10.863503] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@276 -- # bdevperf_pid=358929 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # waitforlisten 358929 /var/tmp/bdevperf.sock 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 358929 ']' 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:57.841 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # echo '{ 00:23:57.841 
"subsystems": [ 00:23:57.841 { 00:23:57.841 "subsystem": "keyring", 00:23:57.841 "config": [ 00:23:57.841 { 00:23:57.841 "method": "keyring_file_add_key", 00:23:57.841 "params": { 00:23:57.841 "name": "key0", 00:23:57.841 "path": "/tmp/tmp.eXxkOEzbcm" 00:23:57.841 } 00:23:57.841 } 00:23:57.841 ] 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "subsystem": "iobuf", 00:23:57.841 "config": [ 00:23:57.841 { 00:23:57.841 "method": "iobuf_set_options", 00:23:57.841 "params": { 00:23:57.841 "small_pool_count": 8192, 00:23:57.841 "large_pool_count": 1024, 00:23:57.841 "small_bufsize": 8192, 00:23:57.841 "large_bufsize": 135168, 00:23:57.841 "enable_numa": false 00:23:57.841 } 00:23:57.841 } 00:23:57.841 ] 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "subsystem": "sock", 00:23:57.841 "config": [ 00:23:57.841 { 00:23:57.841 "method": "sock_set_default_impl", 00:23:57.841 "params": { 00:23:57.841 "impl_name": "posix" 00:23:57.841 } 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "method": "sock_impl_set_options", 00:23:57.841 "params": { 00:23:57.841 "impl_name": "ssl", 00:23:57.841 "recv_buf_size": 4096, 00:23:57.841 "send_buf_size": 4096, 00:23:57.841 "enable_recv_pipe": true, 00:23:57.841 "enable_quickack": false, 00:23:57.841 "enable_placement_id": 0, 00:23:57.841 "enable_zerocopy_send_server": true, 00:23:57.841 "enable_zerocopy_send_client": false, 00:23:57.841 "zerocopy_threshold": 0, 00:23:57.841 "tls_version": 0, 00:23:57.841 "enable_ktls": false 00:23:57.841 } 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "method": "sock_impl_set_options", 00:23:57.841 "params": { 00:23:57.841 "impl_name": "posix", 00:23:57.841 "recv_buf_size": 2097152, 00:23:57.841 "send_buf_size": 2097152, 00:23:57.841 "enable_recv_pipe": true, 00:23:57.841 "enable_quickack": false, 00:23:57.841 "enable_placement_id": 0, 00:23:57.841 "enable_zerocopy_send_server": true, 00:23:57.841 "enable_zerocopy_send_client": false, 00:23:57.841 "zerocopy_threshold": 0, 00:23:57.841 "tls_version": 0, 00:23:57.841 
"enable_ktls": false 00:23:57.841 } 00:23:57.841 } 00:23:57.841 ] 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "subsystem": "vmd", 00:23:57.841 "config": [] 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "subsystem": "accel", 00:23:57.841 "config": [ 00:23:57.841 { 00:23:57.841 "method": "accel_set_options", 00:23:57.841 "params": { 00:23:57.841 "small_cache_size": 128, 00:23:57.841 "large_cache_size": 16, 00:23:57.841 "task_count": 2048, 00:23:57.841 "sequence_count": 2048, 00:23:57.841 "buf_count": 2048 00:23:57.841 } 00:23:57.841 } 00:23:57.841 ] 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "subsystem": "bdev", 00:23:57.841 "config": [ 00:23:57.841 { 00:23:57.841 "method": "bdev_set_options", 00:23:57.841 "params": { 00:23:57.841 "bdev_io_pool_size": 65535, 00:23:57.841 "bdev_io_cache_size": 256, 00:23:57.841 "bdev_auto_examine": true, 00:23:57.841 "iobuf_small_cache_size": 128, 00:23:57.841 "iobuf_large_cache_size": 16 00:23:57.841 } 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "method": "bdev_raid_set_options", 00:23:57.841 "params": { 00:23:57.841 "process_window_size_kb": 1024, 00:23:57.841 "process_max_bandwidth_mb_sec": 0 00:23:57.841 } 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "method": "bdev_iscsi_set_options", 00:23:57.841 "params": { 00:23:57.841 "timeout_sec": 30 00:23:57.841 } 00:23:57.841 }, 00:23:57.841 { 00:23:57.841 "method": "bdev_nvme_set_options", 00:23:57.841 "params": { 00:23:57.841 "action_on_timeout": "none", 00:23:57.841 "timeout_us": 0, 00:23:57.841 "timeout_admin_us": 0, 00:23:57.841 "keep_alive_timeout_ms": 10000, 00:23:57.841 "arbitration_burst": 0, 00:23:57.841 "low_priority_weight": 0, 00:23:57.841 "medium_priority_weight": 0, 00:23:57.841 "high_priority_weight": 0, 00:23:57.841 "nvme_adminq_poll_period_us": 10000, 00:23:57.841 "nvme_ioq_poll_period_us": 0, 00:23:57.841 "io_queue_requests": 512, 00:23:57.841 "delay_cmd_submit": true, 00:23:57.841 "transport_retry_count": 4, 00:23:57.841 "bdev_retry_count": 3, 00:23:57.841 
"transport_ack_timeout": 0, 00:23:57.841 "ctrlr_loss_timeout_sec": 0, 00:23:57.841 "reconnect_delay_sec": 0, 00:23:57.842 "fast_io_fail_timeout_sec": 0, 00:23:57.842 "disable_auto_failback": false, 00:23:57.842 "generate_uuids": false, 00:23:57.842 "transport_tos": 0, 00:23:57.842 "nvme_error_stat": false, 00:23:57.842 "rdma_srq_size": 0, 00:23:57.842 "io_path_stat": false, 00:23:57.842 "allow_accel_sequence": false, 00:23:57.842 "rdma_max_cq_size": 0, 00:23:57.842 "rdma_cm_event_timeout_ms": 0, 00:23:57.842 "dhchap_digests": [ 00:23:57.842 "sha256", 00:23:57.842 "sha384", 00:23:57.842 "sha512" 00:23:57.842 ], 00:23:57.842 "dhchap_dhgroups": [ 00:23:57.842 "null", 00:23:57.842 "ffdhe2048", 00:23:57.842 "ffdhe3072", 00:23:57.842 "ffdhe4096", 00:23:57.842 "ffdhe6144", 00:23:57.842 "ffdhe8192" 00:23:57.842 ], 00:23:57.842 "rdma_umr_per_io": false 00:23:57.842 } 00:23:57.842 }, 00:23:57.842 { 00:23:57.842 "method": "bdev_nvme_attach_controller", 00:23:57.842 "params": { 00:23:57.842 "name": "nvme0", 00:23:57.842 "trtype": "TCP", 00:23:57.842 "adrfam": "IPv4", 00:23:57.842 "traddr": "10.0.0.2", 00:23:57.842 "trsvcid": "4420", 00:23:57.842 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:23:57.842 "prchk_reftag": false, 00:23:57.842 "prchk_guard": false, 00:23:57.842 "ctrlr_loss_timeout_sec": 0, 00:23:57.842 "reconnect_delay_sec": 0, 00:23:57.842 "fast_io_fail_timeout_sec": 0, 00:23:57.842 "psk": "key0", 00:23:57.842 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:23:57.842 "hdgst": false, 00:23:57.842 "ddgst": false, 00:23:57.842 "multipath": "multipath" 00:23:57.842 } 00:23:57.842 }, 00:23:57.842 { 00:23:57.842 "method": "bdev_nvme_set_hotplug", 00:23:57.842 "params": { 00:23:57.842 "period_us": 100000, 00:23:57.842 "enable": false 00:23:57.842 } 00:23:57.842 }, 00:23:57.842 { 00:23:57.842 "method": "bdev_enable_histogram", 00:23:57.842 "params": { 00:23:57.842 "name": "nvme0n1", 00:23:57.842 "enable": true 00:23:57.842 } 00:23:57.842 }, 00:23:57.842 { 00:23:57.842 "method": 
"bdev_wait_for_examine" 00:23:57.842 } 00:23:57.842 ] 00:23:57.842 }, 00:23:57.842 { 00:23:57.842 "subsystem": "nbd", 00:23:57.842 "config": [] 00:23:57.842 } 00:23:57.842 ] 00:23:57.842 }' 00:23:57.842 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:57.842 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:57.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:57.842 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:57.842 05:24:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:23:57.842 [2024-12-15 05:24:11.431931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:23:57.842 [2024-12-15 05:24:11.431977] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358929 ] 00:23:57.842 [2024-12-15 05:24:11.503660] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:57.842 [2024-12-15 05:24:11.525523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:23:58.101 [2024-12-15 05:24:11.673024] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:23:58.669 05:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:58.669 05:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:23:58.669 05:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:23:58.669 05:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # jq -r '.[].name' 00:23:58.928 05:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@279 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:58.928 05:24:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:58.928 Running I/O for 1 seconds... 00:24:00.305 5106.00 IOPS, 19.95 MiB/s 00:24:00.305 Latency(us) 00:24:00.305 [2024-12-15T04:24:13.992Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.305 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:24:00.305 Verification LBA range: start 0x0 length 0x2000 00:24:00.305 nvme0n1 : 1.02 5141.69 20.08 0.00 0.00 24689.95 6522.39 29085.50 00:24:00.305 [2024-12-15T04:24:13.992Z] =================================================================================================================== 00:24:00.305 [2024-12-15T04:24:13.992Z] Total : 5141.69 20.08 0.00 0.00 24689.95 6522.39 29085.50 00:24:00.305 { 00:24:00.305 "results": [ 00:24:00.305 { 00:24:00.305 "job": "nvme0n1", 00:24:00.305 "core_mask": "0x2", 00:24:00.305 "workload": "verify", 00:24:00.305 "status": "finished", 00:24:00.305 "verify_range": { 00:24:00.305 "start": 0, 00:24:00.305 "length": 8192 00:24:00.305 }, 00:24:00.305 "queue_depth": 128, 00:24:00.305 "io_size": 4096, 00:24:00.305 "runtime": 1.017953, 00:24:00.305 "iops": 5141.6912175709485, 00:24:00.305 "mibps": 20.084731318636518, 00:24:00.305 "io_failed": 0, 00:24:00.305 "io_timeout": 0, 00:24:00.305 "avg_latency_us": 24689.95193187401, 00:24:00.305 "min_latency_us": 6522.392380952381, 00:24:00.305 "max_latency_us": 29085.500952380953 00:24:00.305 } 00:24:00.305 ], 00:24:00.305 "core_count": 1 00:24:00.305 } 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@282 -- # trap - SIGINT 
SIGTERM EXIT 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@283 -- # cleanup 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:00.306 nvmf_trace.0 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 358929 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358929 ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358929 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358929 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358929' 00:24:00.306 killing process with pid 358929 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358929 00:24:00.306 Received shutdown signal, test time was about 1.000000 seconds 00:24:00.306 00:24:00.306 Latency(us) 00:24:00.306 [2024-12-15T04:24:13.993Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.306 [2024-12-15T04:24:13.993Z] =================================================================================================================== 00:24:00.306 [2024-12-15T04:24:13.993Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358929 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # sync 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set +e 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:00.306 rmmod nvme_tcp 00:24:00.306 rmmod nvme_fabrics 00:24:00.306 rmmod nvme_keyring 00:24:00.306 05:24:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@128 -- # set -e 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # return 0 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # '[' -n 358695 ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@518 -- # killprocess 358695 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 358695 ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 358695 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:00.306 05:24:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 358695 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 358695' 00:24:00.565 killing process with pid 358695 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 358695 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 358695 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@524 -- # nvmf_tcp_fini 
00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # iptr 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-save 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@791 -- # iptables-restore 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:00.565 05:24:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.101 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:03.101 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Sn95Hk8dQE /tmp/tmp.Abm8grStfn /tmp/tmp.eXxkOEzbcm 00:24:03.101 00:24:03.101 real 1m18.551s 00:24:03.101 user 2m1.502s 00:24:03.101 sys 0m29.036s 00:24:03.101 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:03.101 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:24:03.101 ************************************ 00:24:03.101 END TEST nvmf_tls 00:24:03.101 ************************************ 00:24:03.101 05:24:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:03.101 05:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:03.102 ************************************ 00:24:03.102 START TEST nvmf_fips 00:24:03.102 ************************************ 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:24:03.102 * Looking for test storage... 00:24:03.102 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lcov --version 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:24:03.102 05:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.102 05:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:03.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.102 --rc genhtml_branch_coverage=1 00:24:03.102 --rc genhtml_function_coverage=1 00:24:03.102 --rc genhtml_legend=1 00:24:03.102 --rc geninfo_all_blocks=1 00:24:03.102 --rc geninfo_unexecuted_blocks=1 00:24:03.102 00:24:03.102 ' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:03.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.102 --rc genhtml_branch_coverage=1 00:24:03.102 --rc genhtml_function_coverage=1 00:24:03.102 --rc genhtml_legend=1 00:24:03.102 --rc geninfo_all_blocks=1 00:24:03.102 --rc geninfo_unexecuted_blocks=1 00:24:03.102 00:24:03.102 ' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:03.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.102 --rc genhtml_branch_coverage=1 00:24:03.102 --rc genhtml_function_coverage=1 00:24:03.102 --rc genhtml_legend=1 00:24:03.102 --rc geninfo_all_blocks=1 00:24:03.102 --rc geninfo_unexecuted_blocks=1 00:24:03.102 00:24:03.102 ' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:03.102 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:03.102 --rc genhtml_branch_coverage=1 00:24:03.102 --rc genhtml_function_coverage=1 00:24:03.102 --rc genhtml_legend=1 00:24:03.102 --rc geninfo_all_blocks=1 00:24:03.102 --rc geninfo_unexecuted_blocks=1 00:24:03.102 00:24:03.102 ' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # : 0 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:03.102 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:03.102 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@90 -- # check_openssl_version 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:24:03.103 Error setting digest 00:24:03.103 400267D8DB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:24:03.103 400267D8DB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:03.103 05:24:16 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # xtrace_disable 00:24:03.103 05:24:16 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # pci_devs=() 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # net_devs=() 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # e810=() 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # local -ga e810 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@321 -- # x722=() 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # local -ga x722 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # mlx=() 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@322 -- # local -ga mlx 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 
00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:09.667 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:09.667 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:09.667 Found net devices under 0000:af:00.0: cvl_0_0 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:09.667 Found net devices under 0000:af:00.1: cvl_0_1 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@442 -- # is_hw=yes 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.667 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.668 05:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 
-m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:09.668 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.668 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.375 ms 00:24:09.668 00:24:09.668 --- 10.0.0.2 ping statistics --- 00:24:09.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.668 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.668 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.668 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:24:09.668 00:24:09.668 --- 10.0.0.1 ping statistics --- 00:24:09.668 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.668 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # return 0 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:09.668 05:24:22 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@509 -- # nvmfpid=362872 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@510 -- # waitforlisten 362872 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362872 ']' 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.668 [2024-12-15 05:24:22.756969] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:24:09.668 [2024-12-15 05:24:22.757031] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.668 [2024-12-15 05:24:22.833472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.668 [2024-12-15 05:24:22.854059] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.668 [2024-12-15 05:24:22.854095] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.668 [2024-12-15 05:24:22.854102] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.668 [2024-12-15 05:24:22.854108] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.668 [2024-12-15 05:24:22.854113] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:09.668 [2024-12-15 05:24:22.854570] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.Wg2 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.Wg2 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.Wg2 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.Wg2 00:24:09.668 05:24:22 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:09.668 [2024-12-15 05:24:23.165665] tcp.c: 
756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:09.668 [2024-12-15 05:24:23.181678] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:24:09.668 [2024-12-15 05:24:23.181853] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:09.668 malloc0 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=362906 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 362906 /var/tmp/bdevperf.sock 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 362906 ']' 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:09.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:09.668 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:09.668 [2024-12-15 05:24:23.309734] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:24:09.668 [2024-12-15 05:24:23.309781] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362906 ] 00:24:09.927 [2024-12-15 05:24:23.382904] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:09.927 [2024-12-15 05:24:23.405153] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:24:09.927 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:09.927 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:24:09.927 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.Wg2 00:24:10.186 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:24:10.186 [2024-12-15 05:24:23.839776] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:24:10.444 TLSTESTn1 00:24:10.444 05:24:23 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:24:10.444 Running I/O for 10 seconds... 
00:24:12.753 4868.00 IOPS, 19.02 MiB/s [2024-12-15T04:24:27.374Z] 5254.50 IOPS, 20.53 MiB/s [2024-12-15T04:24:28.309Z] 5149.67 IOPS, 20.12 MiB/s [2024-12-15T04:24:29.244Z] 5229.25 IOPS, 20.43 MiB/s [2024-12-15T04:24:30.179Z] 5308.80 IOPS, 20.74 MiB/s [2024-12-15T04:24:31.115Z] 5344.50 IOPS, 20.88 MiB/s [2024-12-15T04:24:32.049Z] 5343.29 IOPS, 20.87 MiB/s [2024-12-15T04:24:33.425Z] 5396.00 IOPS, 21.08 MiB/s [2024-12-15T04:24:34.362Z] 5398.00 IOPS, 21.09 MiB/s [2024-12-15T04:24:34.362Z] 5425.40 IOPS, 21.19 MiB/s 00:24:20.675 Latency(us) 00:24:20.675 [2024-12-15T04:24:34.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.675 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:20.675 Verification LBA range: start 0x0 length 0x2000 00:24:20.675 TLSTESTn1 : 10.02 5424.46 21.19 0.00 0.00 23553.36 5523.75 33454.57 00:24:20.675 [2024-12-15T04:24:34.362Z] =================================================================================================================== 00:24:20.675 [2024-12-15T04:24:34.362Z] Total : 5424.46 21.19 0.00 0.00 23553.36 5523.75 33454.57 00:24:20.675 { 00:24:20.675 "results": [ 00:24:20.675 { 00:24:20.675 "job": "TLSTESTn1", 00:24:20.675 "core_mask": "0x4", 00:24:20.675 "workload": "verify", 00:24:20.675 "status": "finished", 00:24:20.675 "verify_range": { 00:24:20.675 "start": 0, 00:24:20.675 "length": 8192 00:24:20.675 }, 00:24:20.675 "queue_depth": 128, 00:24:20.675 "io_size": 4096, 00:24:20.675 "runtime": 10.024958, 00:24:20.675 "iops": 5424.461628667173, 00:24:20.675 "mibps": 21.189303236981143, 00:24:20.675 "io_failed": 0, 00:24:20.675 "io_timeout": 0, 00:24:20.675 "avg_latency_us": 23553.35708380182, 00:24:20.675 "min_latency_us": 5523.748571428571, 00:24:20.675 "max_latency_us": 33454.56761904762 00:24:20.675 } 00:24:20.675 ], 00:24:20.675 "core_count": 1 00:24:20.675 } 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:24:20.675 
05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:24:20.675 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:24:20.676 nvmf_trace.0 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 362906 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362906 ']' 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362906 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362906 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362906' 00:24:20.676 killing process with pid 362906 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362906 00:24:20.676 Received shutdown signal, test time was about 10.000000 seconds 00:24:20.676 00:24:20.676 Latency(us) 00:24:20.676 [2024-12-15T04:24:34.363Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.676 [2024-12-15T04:24:34.363Z] =================================================================================================================== 00:24:20.676 [2024-12-15T04:24:34.363Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.676 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362906 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # sync 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set +e 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:20.935 rmmod nvme_tcp 00:24:20.935 rmmod nvme_fabrics 00:24:20.935 rmmod nvme_keyring 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:20.935 05:24:34 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@128 -- # set -e 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # return 0 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@517 -- # '[' -n 362872 ']' 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@518 -- # killprocess 362872 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 362872 ']' 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 362872 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 362872 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 362872' 00:24:20.935 killing process with pid 362872 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 362872 00:24:20.935 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 362872 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # iptr 
00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-save 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@791 -- # iptables-restore 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:21.194 05:24:34 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.Wg2 00:24:23.139 00:24:23.139 real 0m20.411s 00:24:23.139 user 0m21.897s 00:24:23.139 sys 0m8.874s 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 ************************************ 00:24:23.139 END TEST nvmf_fips 00:24:23.139 ************************************ 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # 
xtrace_disable 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:23.139 ************************************ 00:24:23.139 START TEST nvmf_control_msg_list 00:24:23.139 ************************************ 00:24:23.139 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:24:23.398 * Looking for test storage... 00:24:23.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:23.398 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lcov --version 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
scripts/common.sh@338 -- # local 'op=<' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 
00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:23.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.399 --rc genhtml_branch_coverage=1 00:24:23.399 --rc genhtml_function_coverage=1 00:24:23.399 --rc genhtml_legend=1 00:24:23.399 --rc geninfo_all_blocks=1 00:24:23.399 --rc geninfo_unexecuted_blocks=1 00:24:23.399 00:24:23.399 ' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:23.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.399 --rc genhtml_branch_coverage=1 00:24:23.399 --rc genhtml_function_coverage=1 00:24:23.399 --rc genhtml_legend=1 00:24:23.399 --rc geninfo_all_blocks=1 00:24:23.399 --rc geninfo_unexecuted_blocks=1 00:24:23.399 00:24:23.399 ' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:23.399 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.399 --rc genhtml_branch_coverage=1 00:24:23.399 --rc genhtml_function_coverage=1 00:24:23.399 --rc genhtml_legend=1 00:24:23.399 --rc geninfo_all_blocks=1 00:24:23.399 --rc geninfo_unexecuted_blocks=1 00:24:23.399 00:24:23.399 ' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:23.399 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:23.399 --rc genhtml_branch_coverage=1 00:24:23.399 --rc genhtml_function_coverage=1 00:24:23.399 --rc genhtml_legend=1 00:24:23.399 --rc geninfo_all_blocks=1 00:24:23.399 --rc geninfo_unexecuted_blocks=1 00:24:23.399 00:24:23.399 ' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:23.399 05:24:36 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:23.399 05:24:36 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.399 05:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # : 0 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:23.399 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:23.399 05:24:37 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:24:23.399 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # xtrace_disable 00:24:23.400 05:24:37 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # pci_devs=() 00:24:29.973 05:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # net_devs=() 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # e810=() 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@320 -- # local -ga e810 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # x722=() 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # local -ga x722 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # mlx=() 00:24:29.973 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@322 -- # local -ga mlx 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in 
"${pci_devs[@]}" 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:29.974 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:29.974 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:29.974 05:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:29.974 Found net devices under 0000:af:00.0: cvl_0_0 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:29.974 05:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:29.974 Found net devices under 0000:af:00.1: cvl_0_1 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@442 -- # is_hw=yes 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:29.974 05:24:42 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:29.974 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:29.974 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:24:29.974 00:24:29.974 --- 10.0.0.2 ping statistics --- 00:24:29.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.974 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:29.974 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:29.974 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:24:29.974 00:24:29.974 --- 10.0.0.1 ping statistics --- 00:24:29.974 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:29.974 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@450 -- # return 0 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:29.974 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t 
tcp -o' 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@509 -- # nvmfpid=368155 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@510 -- # waitforlisten 368155 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 368155 ']' 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:29.975 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
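The nvmf_tcp_init steps in the log above can be replayed standalone as a dry run. This is a sketch, not the actual nvmf/common.sh code: one NIC port (cvl_0_0) moves into a private namespace for the target while its sibling (cvl_0_1) stays in the root namespace as the initiator. With run="" it would need root and these exact interfaces; run="echo" just prints each step.

```shell
# Dry-run sketch of the netns topology built by nvmf_tcp_init in this log.
# Set run="" (as root, with cvl_0_0/cvl_0_1 present) to apply for real.
run="echo"
$run ip -4 addr flush cvl_0_0
$run ip -4 addr flush cvl_0_1
$run ip netns add cvl_0_0_ns_spdk
$run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
$run ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator IP
$run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target IP
$run ip link set cvl_0_1 up
$run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
$run ip netns exec cvl_0_0_ns_spdk ip link set lo up
$run ping -c 1 10.0.0.2                                 # root ns -> target
$run ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
```

The two ping checks mirror the ones at common.sh@290/291 in the log, confirming both directions of the veth-less, two-port loopback before the target starts.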
00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:29.975 05:24:42 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 [2024-12-15 05:24:42.928771] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:29.975 [2024-12-15 05:24:42.928812] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:29.975 [2024-12-15 05:24:42.986591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:29.975 [2024-12-15 05:24:43.007771] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:29.975 [2024-12-15 05:24:43.007802] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:29.975 [2024-12-15 05:24:43.007809] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:29.975 [2024-12-15 05:24:43.007817] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:29.975 [2024-12-15 05:24:43.007822] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
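The ipts call at common.sh@287 above expands into an iptables rule tagged with an "SPDK_NVMF:" comment, and the iptr teardown later in the log strips only those tagged rules via iptables-save | grep -v SPDK_NVMF | iptables-restore. A minimal sketch of that tag-and-filter pattern on a plain-text ruleset (the eth0 ssh rule is a made-up bystander rule for illustration):

```shell
# Tag-and-filter pattern used by the ipts/iptr helpers: rules added by the test
# carry an SPDK_NVMF comment so cleanup can remove them without touching others.
saved='-A INPUT -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT"
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'
# iptr keeps everything that is NOT SPDK-tagged (real code feeds iptables-restore):
printf '%s\n' "$saved" | grep -v SPDK_NVMF
```

Only the untagged ssh rule survives the filter, which is why the teardown at common.sh@297 can run unconditionally without wrecking the host firewall.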
00:24:29.975 [2024-12-15 05:24:43.008309] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 [2024-12-15 05:24:43.138653] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 Malloc0 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:29.975 [2024-12-15 05:24:43.174769] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=368174 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=368175 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=368176 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 368174 00:24:29.975 05:24:43 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:29.975 [2024-12-15 05:24:43.263543] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
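The rpc_cmd calls in control_msg_list.sh@19-23 above boil down to five RPCs against the running target. A dry-run sketch follows; the ./scripts/rpc.py path is an assumption (standard SPDK layout; the log actually uses the rpc_cmd wrapper). The flags match the log: a small in-capsule data size and --control-msg-num 1 constrain control-message buffers, which is the resource this test exercises.

```shell
# Dry-run sketch of the RPC sequence configuring the NVMe/TCP target in this log.
# rpc="./scripts/rpc.py" (with a live target) would issue the calls for real.
rpc="echo ./scripts/rpc.py"
$rpc nvmf_create_transport -t tcp -o --in-capsule-data-size 768 --control-msg-num 1
$rpc nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a
$rpc bdev_malloc_create -b Malloc0 32 512
$rpc nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0
$rpc nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
```

After the listener is up, the three spdk_nvme_perf instances in the log connect to 10.0.0.2:4420 concurrently on cores 0x2/0x4/0x8 to contend for the single control-message buffer.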
00:24:29.975 [2024-12-15 05:24:43.263729] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:29.975 [2024-12-15 05:24:43.263884] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:30.911 Initializing NVMe Controllers 00:24:30.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:30.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:24:30.911 Initialization complete. Launching workers. 00:24:30.911 ======================================================== 00:24:30.911 Latency(us) 00:24:30.911 Device Information : IOPS MiB/s Average min max 00:24:30.911 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 1871.00 7.31 551.00 126.09 41905.11 00:24:30.911 ======================================================== 00:24:30.911 Total : 1871.00 7.31 551.00 126.09 41905.11 00:24:30.911 00:24:30.911 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 368175 00:24:30.911 Initializing NVMe Controllers 00:24:30.911 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:30.911 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:24:30.911 Initialization complete. Launching workers. 
00:24:30.911 ======================================================== 00:24:30.911 Latency(us) 00:24:30.911 Device Information : IOPS MiB/s Average min max 00:24:30.911 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 41210.78 40651.04 41939.71 00:24:30.911 ======================================================== 00:24:30.912 Total : 25.00 0.10 41210.78 40651.04 41939.71 00:24:30.912 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 368176 00:24:30.912 Initializing NVMe Controllers 00:24:30.912 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:30.912 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:24:30.912 Initialization complete. Launching workers. 00:24:30.912 ======================================================== 00:24:30.912 Latency(us) 00:24:30.912 Device Information : IOPS MiB/s Average min max 00:24:30.912 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 4717.00 18.43 219.18 125.39 41937.12 00:24:30.912 ======================================================== 00:24:30.912 Total : 4717.00 18.43 219.18 125.39 41937.12 00:24:30.912 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@121 -- # sync 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@124 -- # set +e 00:24:30.912 05:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:30.912 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:30.912 rmmod nvme_tcp 00:24:30.912 rmmod nvme_fabrics 00:24:31.171 rmmod nvme_keyring 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@128 -- # set -e 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # return 0 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@517 -- # '[' -n 368155 ']' 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@518 -- # killprocess 368155 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 368155 ']' 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 368155 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 368155 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 368155' 00:24:31.171 killing process with pid 368155 00:24:31.171 05:24:44 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 368155 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 368155 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@297 -- # iptr 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-save 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:31.171 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@791 -- # iptables-restore 00:24:31.430 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:31.430 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:31.430 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.430 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.430 05:24:44 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:33.333 00:24:33.333 real 0m10.121s 00:24:33.333 user 0m7.082s 00:24:33.333 sys 0m5.269s 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:24:33.333 ************************************ 00:24:33.333 END TEST nvmf_control_msg_list 00:24:33.333 ************************************ 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:33.333 ************************************ 00:24:33.333 START TEST nvmf_wait_for_buf 00:24:33.333 ************************************ 00:24:33.333 05:24:46 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:24:33.593 * Looking for test storage... 
00:24:33.593 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lcov --version 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # 
export 'LCOV_OPTS= 00:24:33.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.593 --rc genhtml_branch_coverage=1 00:24:33.593 --rc genhtml_function_coverage=1 00:24:33.593 --rc genhtml_legend=1 00:24:33.593 --rc geninfo_all_blocks=1 00:24:33.593 --rc geninfo_unexecuted_blocks=1 00:24:33.593 00:24:33.593 ' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:33.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.593 --rc genhtml_branch_coverage=1 00:24:33.593 --rc genhtml_function_coverage=1 00:24:33.593 --rc genhtml_legend=1 00:24:33.593 --rc geninfo_all_blocks=1 00:24:33.593 --rc geninfo_unexecuted_blocks=1 00:24:33.593 00:24:33.593 ' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:33.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.593 --rc genhtml_branch_coverage=1 00:24:33.593 --rc genhtml_function_coverage=1 00:24:33.593 --rc genhtml_legend=1 00:24:33.593 --rc geninfo_all_blocks=1 00:24:33.593 --rc geninfo_unexecuted_blocks=1 00:24:33.593 00:24:33.593 ' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:33.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:33.593 --rc genhtml_branch_coverage=1 00:24:33.593 --rc genhtml_function_coverage=1 00:24:33.593 --rc genhtml_legend=1 00:24:33.593 --rc geninfo_all_blocks=1 00:24:33.593 --rc geninfo_unexecuted_blocks=1 00:24:33.593 00:24:33.593 ' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # : 0 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:33.593 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 
0xFFFF) 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:33.594 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # xtrace_disable 00:24:33.594 05:24:47 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # pci_devs=() 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # net_devs=() 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # e810=() 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@320 -- # local -ga e810 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # x722=() 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # local -ga x722 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # mlx=() 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@322 -- # local -ga mlx 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:24:40.165 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:40.165 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:40.165 Found net devices under 0000:af:00.0: cvl_0_0 00:24:40.165 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:40.166 05:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:40.166 Found net devices under 0000:af:00.1: cvl_0_1 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@442 -- # is_hw=yes 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:40.166 05:24:52 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:40.166 05:24:52 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:40.166 05:24:53 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:40.166 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:40.166 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:24:40.166 00:24:40.166 --- 10.0.0.2 ping statistics --- 00:24:40.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.166 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:40.166 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:40.166 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:24:40.166 00:24:40.166 --- 10.0.0.1 ping statistics --- 00:24:40.166 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:40.166 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@450 -- # return 0 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@509 -- # nvmfpid=371865 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@510 -- # waitforlisten 371865 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 371865 ']' 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:40.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.166 [2024-12-15 05:24:53.153815] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:24:40.166 [2024-12-15 05:24:53.153859] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:40.166 [2024-12-15 05:24:53.227669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:40.166 [2024-12-15 05:24:53.249190] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:40.166 [2024-12-15 05:24:53.249224] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:40.166 [2024-12-15 05:24:53.249231] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:40.166 [2024-12-15 05:24:53.249237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:40.166 [2024-12-15 05:24:53.249242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:40.166 [2024-12-15 05:24:53.249724] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.166 
05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.166 Malloc0 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.166 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@10 -- # set +x 00:24:40.166 [2024-12-15 05:24:53.435176] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:40.167 [2024-12-15 05:24:53.463363] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:40.167 05:24:53 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:40.167 [2024-12-15 05:24:53.548077] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:24:41.548 Initializing NVMe Controllers 00:24:41.548 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:24:41.548 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:24:41.548 Initialization complete. Launching workers. 00:24:41.548 ======================================================== 00:24:41.548 Latency(us) 00:24:41.548 Device Information : IOPS MiB/s Average min max 00:24:41.548 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 27.90 3.49 152371.51 7296.47 198533.27 00:24:41.548 ======================================================== 00:24:41.548 Total : 27.90 3.49 152371.51 7296.47 198533.27 00:24:41.548 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.548 05:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=422 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 422 -eq 0 ]] 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # nvmftestfini 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@516 -- # nvmfcleanup 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@121 -- # sync 00:24:41.548 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@124 -- # set +e 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # for i in {1..20} 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:24:41.549 rmmod nvme_tcp 00:24:41.549 rmmod nvme_fabrics 00:24:41.549 rmmod nvme_keyring 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@128 -- # set -e 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # return 0 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@517 -- # '[' -n 371865 ']' 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@518 -- # killprocess 371865 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 371865 ']' 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 371865 
00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 371865 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 371865' 00:24:41.549 killing process with pid 371865 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 371865 00:24:41.549 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 371865 00:24:41.811 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@297 -- # iptr 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-save 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@791 -- # iptables-restore 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:24:41.812 05:24:55 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.812 05:24:55 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:24:44.348 00:24:44.348 real 0m10.449s 00:24:44.348 user 0m4.056s 00:24:44.348 sys 0m4.829s 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 ************************************ 00:24:44.348 END TEST nvmf_wait_for_buf 00:24:44.348 ************************************ 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 1 -eq 1 ']' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@48 -- # run_test nvmf_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh --transport=tcp 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:24:44.348 ************************************ 00:24:44.348 START TEST nvmf_fuzz 00:24:44.348 ************************************ 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fabrics_fuzz.sh 
--transport=tcp 00:24:44.348 * Looking for test storage... 00:24:44.348 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lcov --version 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@345 -- # : 1 00:24:44.348 05:24:57 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # decimal 1 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=1 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 1 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # decimal 2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@353 -- # local d=2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@355 -- # echo 2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@368 -- # return 0 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:24:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.348 --rc genhtml_branch_coverage=1 00:24:44.348 --rc genhtml_function_coverage=1 
00:24:44.348 --rc genhtml_legend=1 00:24:44.348 --rc geninfo_all_blocks=1 00:24:44.348 --rc geninfo_unexecuted_blocks=1 00:24:44.348 00:24:44.348 ' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:24:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.348 --rc genhtml_branch_coverage=1 00:24:44.348 --rc genhtml_function_coverage=1 00:24:44.348 --rc genhtml_legend=1 00:24:44.348 --rc geninfo_all_blocks=1 00:24:44.348 --rc geninfo_unexecuted_blocks=1 00:24:44.348 00:24:44.348 ' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:24:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.348 --rc genhtml_branch_coverage=1 00:24:44.348 --rc genhtml_function_coverage=1 00:24:44.348 --rc genhtml_legend=1 00:24:44.348 --rc geninfo_all_blocks=1 00:24:44.348 --rc geninfo_unexecuted_blocks=1 00:24:44.348 00:24:44.348 ' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:24:44.348 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:44.348 --rc genhtml_branch_coverage=1 00:24:44.348 --rc genhtml_function_coverage=1 00:24:44.348 --rc genhtml_legend=1 00:24:44.348 --rc geninfo_all_blocks=1 00:24:44.348 --rc geninfo_unexecuted_blocks=1 00:24:44.348 00:24:44.348 ' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # uname -s 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:44.348 
05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:44.348 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@5 -- # export PATH 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@51 -- # : 0 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:24:44.349 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@55 -- # have_pci_nics=0 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@11 -- # nvmftestinit 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@476 -- # prepare_net_devs 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@438 -- # local -g is_hw=no 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@440 -- # remove_spdk_ns 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@309 -- # xtrace_disable 00:24:44.349 05:24:57 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
common/autotest_common.sh@10 -- # set +x 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # pci_devs=() 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@315 -- # local -a pci_devs 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # pci_net_devs=() 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # pci_drivers=() 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@317 -- # local -A pci_drivers 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # net_devs=() 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@319 -- # local -ga net_devs 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # e810=() 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@320 -- # local -ga e810 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # x722=() 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@321 -- # local -ga x722 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # mlx=() 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@322 -- # local -ga mlx 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:50.919 05:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:24:50.919 Found 0000:af:00.0 (0x8086 - 0x159b) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:24:50.919 Found 0000:af:00.1 (0x8086 - 0x159b) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:24:50.919 Found net devices under 0000:af:00.0: cvl_0_0 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:50.919 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@418 -- # [[ up == up ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:24:50.920 Found net devices under 0000:af:00.1: cvl_0_1 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@429 
-- # net_devs+=("${pci_net_devs[@]}") 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@442 -- # is_hw=yes 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:24:50.920 05:25:03 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:24:50.920 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:50.920 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.371 ms 00:24:50.920 00:24:50.920 --- 10.0.0.2 ping statistics --- 00:24:50.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.920 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:50.920 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:50.920 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:24:50.920 00:24:50.920 --- 10.0.0.1 ping statistics --- 00:24:50.920 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:50.920 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@450 -- # return 0 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@14 -- # nvmfpid=375779 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@13 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@16 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@18 -- # waitforlisten 375779 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@835 -- # '[' 
-z 375779 ']' 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:50.920 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@868 -- # return 0 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 64 512 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.920 Malloc0 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- 
target/fabrics_fuzz.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@27 -- # trid='trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' 00:24:50.920 05:25:03 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -N -a 00:25:23.009 Fuzzing completed. 
Shutting down the fuzz application 00:25:23.009 00:25:23.009 Dumping successful admin opcodes: 00:25:23.009 9, 10, 00:25:23.009 Dumping successful io opcodes: 00:25:23.009 0, 9, 00:25:23.009 NS: 0x2000008eff00 I/O qp, Total commands completed: 1007516, total successful commands: 5901, random_seed: 1722478912 00:25:23.009 NS: 0x2000008eff00 admin qp, Total commands completed: 132000, total successful commands: 29, random_seed: 545445184 00:25:23.009 05:25:34 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -F 'trtype:tcp adrfam:IPv4 subnqn:nqn.2016-06.io.spdk:cnode1 traddr:10.0.0.2 trsvcid:4420' -j /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/example.json -a 00:25:23.009 Fuzzing completed. Shutting down the fuzz application 00:25:23.009 00:25:23.009 Dumping successful admin opcodes: 00:25:23.009 00:25:23.009 Dumping successful io opcodes: 00:25:23.009 00:25:23.009 NS: 0x2000008eff00 I/O qp, Total commands completed: 0, total successful commands: 0, random_seed: 3619233906 00:25:23.009 NS: 0x2000008eff00 admin qp, Total commands completed: 16, total successful commands: 0, random_seed: 3619299018 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@38 -- # nvmftestfini 00:25:23.009 05:25:35 
nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@516 -- # nvmfcleanup 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@121 -- # sync 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@124 -- # set +e 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@125 -- # for i in {1..20} 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:25:23.009 rmmod nvme_tcp 00:25:23.009 rmmod nvme_fabrics 00:25:23.009 rmmod nvme_keyring 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@128 -- # set -e 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@129 -- # return 0 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@517 -- # '[' -n 375779 ']' 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@518 -- # killprocess 375779 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@954 -- # '[' -z 375779 ']' 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@958 -- # kill -0 375779 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # uname 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 375779 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 375779' 00:25:23.009 killing process with pid 375779 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@973 -- # kill 375779 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@978 -- # wait 375779 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@297 -- # iptr 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-save 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@791 -- # iptables-restore 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@302 -- # remove_spdk_ns 00:25:23.009 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:23.010 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:23.010 05:25:35 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- target/fabrics_fuzz.sh@39 -- # rm 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs1.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_fuzz_logs2.txt 00:25:24.388 00:25:24.388 real 0m40.326s 00:25:24.388 user 0m53.984s 00:25:24.388 sys 0m15.477s 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_fuzz -- common/autotest_common.sh@10 -- # set +x 00:25:24.388 ************************************ 00:25:24.388 END TEST nvmf_fuzz 00:25:24.388 ************************************ 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@49 -- # run_test nvmf_multiconnection /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:25:24.388 ************************************ 00:25:24.388 START TEST nvmf_multiconnection 00:25:24.388 ************************************ 00:25:24.388 05:25:37 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multiconnection.sh --transport=tcp 00:25:24.388 * Looking for test storage... 
00:25:24.388 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:24.388 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:24.388 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lcov --version 00:25:24.388 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # IFS=.-: 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@336 -- # read -ra ver1 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # IFS=.-: 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@337 -- # read -ra ver2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@338 -- # local 'op=<' 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@340 -- # ver1_l=2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@341 -- # ver2_l=1 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@344 -- # case "$op" in 00:25:24.649 05:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@345 -- # : 1 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # decimal 1 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=1 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 1 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@365 -- # ver1[v]=1 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # decimal 2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@353 -- # local d=2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@355 -- # echo 2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@366 -- # ver2[v]=2 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@368 -- # return 0 00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 
00:25:24.649 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:24.649 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.649 --rc genhtml_branch_coverage=1 00:25:24.649 --rc genhtml_function_coverage=1 00:25:24.649 --rc genhtml_legend=1 00:25:24.649 --rc geninfo_all_blocks=1 00:25:24.649 --rc geninfo_unexecuted_blocks=1 00:25:24.649 00:25:24.649 ' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:24.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.650 --rc genhtml_branch_coverage=1 00:25:24.650 --rc genhtml_function_coverage=1 00:25:24.650 --rc genhtml_legend=1 00:25:24.650 --rc geninfo_all_blocks=1 00:25:24.650 --rc geninfo_unexecuted_blocks=1 00:25:24.650 00:25:24.650 ' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:24.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.650 --rc genhtml_branch_coverage=1 00:25:24.650 --rc genhtml_function_coverage=1 00:25:24.650 --rc genhtml_legend=1 00:25:24.650 --rc geninfo_all_blocks=1 00:25:24.650 --rc geninfo_unexecuted_blocks=1 00:25:24.650 00:25:24.650 ' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:24.650 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:24.650 --rc genhtml_branch_coverage=1 00:25:24.650 --rc genhtml_function_coverage=1 00:25:24.650 --rc genhtml_legend=1 00:25:24.650 --rc geninfo_all_blocks=1 00:25:24.650 --rc geninfo_unexecuted_blocks=1 00:25:24.650 00:25:24.650 ' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@7 -- # uname -s 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:24.650 05:25:38 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@15 -- # shopt -s extglob 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@5 -- # export PATH 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@51 -- # : 0 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:25:24.650 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@55 -- # have_pci_nics=0 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@11 -- # 
MALLOC_BDEV_SIZE=64 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@14 -- # NVMF_SUBSYS=11 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@16 -- # nvmftestinit 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@476 -- # prepare_net_devs 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@438 -- # local -g is_hw=no 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@440 -- # remove_spdk_ns 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@309 -- # xtrace_disable 00:25:24.650 05:25:38 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # pci_devs=() 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@315 -- # local -a pci_devs 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # pci_net_devs=() 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # pci_drivers=() 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@317 -- # local -A pci_drivers 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # net_devs=() 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@319 -- # local -ga net_devs 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # e810=() 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@320 -- # local -ga e810 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # x722=() 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@321 -- # local -ga x722 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # mlx=() 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@322 -- # local -ga mlx 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:31.223 05:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:25:31.223 05:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:25:31.223 Found 0000:af:00.0 (0x8086 - 0x159b) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:25:31.223 Found 0000:af:00.1 (0x8086 - 0x159b) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == up ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:25:31.223 Found net devices under 0000:af:00.0: cvl_0_0 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@418 -- # [[ up == 
up ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:25:31.223 Found net devices under 0000:af:00.1: cvl_0_1 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@442 -- # is_hw=yes 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@259 -- # 
NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:31.223 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:31.223 05:25:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:25:31.224 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:31.224 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:25:31.224 00:25:31.224 --- 10.0.0.2 ping statistics --- 00:25:31.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.224 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:31.224 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:31.224 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.196 ms 00:25:31.224 00:25:31.224 --- 10.0.0.1 ping statistics --- 00:25:31.224 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:31.224 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@450 -- # return 0 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
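The trace above shows `nvmf_tcp_init` (nvmf/common.sh) moving the target-side port into its own network namespace and wiring up addresses. A minimal sketch of those steps follows; the interface names (`cvl_0_0`, `cvl_0_1`), namespace name, and 10.0.0.0/24 addresses are taken from this log, while the wrapper function name is hypothetical:

```shell
# Sketch of the netns setup performed by nvmf/common.sh (requires root).
# Interface names and addresses match this log; adjust for other NICs.
setup_tcp_netns() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk

    ip -4 addr flush "$target_if"
    ip -4 addr flush "$initiator_if"

    # Isolate the target-side port in its own namespace.
    ip netns add "$ns"
    ip link set "$target_if" netns "$ns"

    # Initiator keeps 10.0.0.1; target gets 10.0.0.2 inside the namespace.
    ip addr add 10.0.0.1/24 dev "$initiator_if"
    ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"

    ip link set "$initiator_if" up
    ip netns exec "$ns" ip link set "$target_if" up
    ip netns exec "$ns" ip link set lo up

    # Open the NVMe/TCP port (4420) on the initiator-side interface,
    # as the 'ipts' wrapper does in the trace.
    iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT

    # Sanity check, mirroring the two pings in the log.
    ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Keeping the target interface in a separate namespace lets target and initiator share one host while still exercising a real TCP path between two NIC ports.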
00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:25:31.224 05:25:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@17 -- # nvmfappstart -m 0xF 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@509 -- # nvmfpid=384315 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@510 -- # waitforlisten 384315 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@835 -- # '[' -z 384315 ']' 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:31.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 [2024-12-15 05:25:44.058482] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:25:31.224 [2024-12-15 05:25:44.058530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:31.224 [2024-12-15 05:25:44.135100] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:31.224 [2024-12-15 05:25:44.159447] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:31.224 [2024-12-15 05:25:44.159484] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:31.224 [2024-12-15 05:25:44.159491] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:31.224 [2024-12-15 05:25:44.159497] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:31.224 [2024-12-15 05:25:44.159502] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
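The `nvmfappstart -m 0xF` call traced here launches `nvmf_tgt` inside the target namespace, waits for its RPC socket, then (at multiconnection.sh@19) creates the TCP transport. The sketch below reconstructs that flow; the `rpc.py` path and the readiness check via `rpc_get_methods` are assumptions about a standard SPDK tree, while the `nvmf_tgt` flags are taken from this log:

```shell
# Sketch of nvmfappstart as traced above (assumptions noted in the lead-in).
start_nvmf_tgt() {
    local ns=cvl_0_0_ns_spdk
    local spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    # Flags from the log: instance 0, tracepoint mask 0xFFFF, core mask 0xF.
    ip netns exec "$ns" "$spdk/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
    local pid=$!

    # waitforlisten: poll until the app answers on /var/tmp/spdk.sock.
    until "$spdk/scripts/rpc.py" rpc_get_methods &> /dev/null; do
        kill -0 "$pid" 2> /dev/null || return 1   # target died during startup
        sleep 0.5
    done

    # Matches the trace: TCP transport with '-o' and an 8192-byte IO unit.
    "$spdk/scripts/rpc.py" nvmf_create_transport -t tcp -o -u 8192
}
```

Note that the RPC socket is a Unix-domain socket, so `rpc.py` reaches the target without entering the network namespace.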
00:25:31.224 [2024-12-15 05:25:44.160851] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:31.224 [2024-12-15 05:25:44.160959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:25:31.224 [2024-12-15 05:25:44.161042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:31.224 [2024-12-15 05:25:44.161042] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@868 -- # return 0 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 [2024-12-15 05:25:44.298131] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # seq 1 11 00:25:31.224 05:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 Malloc1 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK1 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 [2024-12-15 05:25:44.359671] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc2 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 Malloc2 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc2 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc3 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 Malloc3 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK3 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Malloc3 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.224 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc4 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 Malloc4 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK4 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 
05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Malloc4 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc5 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 Malloc5 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode5 -a -s SPDK5 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode5 Malloc5 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode5 -t tcp -a 10.0.0.2 -s 4420 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc6 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 Malloc6 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode6 -a -s SPDK6 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode6 Malloc6 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode6 -t tcp -a 10.0.0.2 -s 4420 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # 
for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc7 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 Malloc7 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7 -a -s SPDK7 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode7 Malloc7 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode7 -t tcp -a 10.0.0.2 -s 4420 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc8 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 Malloc8 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode8 -a -s SPDK8 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode8 Malloc8 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode8 -t tcp -a 10.0.0.2 -s 4420 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc9 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 Malloc9 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode9 -a -s SPDK9 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode9 Malloc9 00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.225 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode9 -t tcp -a 10.0.0.2 -s 4420
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc10
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 Malloc10
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode10 -a -s SPDK10
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode10 Malloc10
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode10 -t tcp -a 10.0.0.2 -s 4420
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@21 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc11
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 Malloc11
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode11 -a -s SPDK11
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode11 Malloc11
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode11 -t tcp -a 10.0.0.2 -s 4420
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # seq 1 11
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:31.226 05:25:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
00:25:32.608 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK1
00:25:32.608 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:32.608 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:32.608 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:32.608 05:25:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK1
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:34.524 05:25:47 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode2 -a 10.0.0.2 -s 4420
00:25:35.461 05:25:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK2
00:25:35.461 05:25:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:35.461 05:25:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:35.461 05:25:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:35.461 05:25:49 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK2
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:37.998 05:25:51 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode3 -a 10.0.0.2 -s 4420
00:25:38.936 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK3
00:25:38.936 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:38.937 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:38.937 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:38.937 05:25:52 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK3
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:40.844 05:25:54 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode4 -a 10.0.0.2 -s 4420
00:25:42.223 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK4
00:25:42.223 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:42.223 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:42.223 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:42.223 05:25:55 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK4
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:44.130 05:25:57 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode5 -a 10.0.0.2 -s 4420
00:25:45.509 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK5
00:25:45.509 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:45.509 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:45.510 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:45.510 05:25:58 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK5
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:47.416 05:26:00 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode6 -a 10.0.0.2 -s 4420
00:25:48.795 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK6
00:25:48.795 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:48.795 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:48.795 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:48.795 05:26:02 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK6
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:50.703 05:26:04 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode7 -a 10.0.0.2 -s 4420
00:25:52.083 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK7
00:25:52.083 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:52.083 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:52.083 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:52.083 05:26:05 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK7
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:53.989 05:26:07 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode8 -a 10.0.0.2 -s 4420
00:25:55.369 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK8
00:25:55.369 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:55.369 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:55.369 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:55.369 05:26:08 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK8
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:25:57.276 05:26:10 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode9 -a 10.0.0.2 -s 4420
00:25:59.184 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK9
00:25:59.184 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:25:59.184 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:25:59.184 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:25:59.184 05:26:12 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK9
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:01.091 05:26:14 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode10 -a 10.0.0.2 -s 4420
00:26:02.029 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK10
00:26:02.029 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:02.029 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:02.029 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:02.029 05:26:15 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK10
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@28 -- # for i in $(seq 1 $NVMF_SUBSYS)
00:26:04.567 05:26:17 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode11 -a 10.0.0.2 -s 4420
00:26:05.505 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@30 -- # waitforserial SPDK11
00:26:05.505 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1202 -- # local i=0
00:26:05.505 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:26:05.505 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:26:05.505 05:26:18 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1209 -- # sleep 2
00:26:07.412 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:26:07.412 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:26:07.412 05:26:20 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # grep -c SPDK11
00:26:07.412 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:26:07.412 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:26:07.412
05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1212 -- # return 0
00:26:07.412 05:26:21 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t read -r 10
00:26:07.412 [global]
00:26:07.412 thread=1
00:26:07.412 invalidate=1
00:26:07.412 rw=read
00:26:07.412 time_based=1
00:26:07.412 runtime=10
00:26:07.412 ioengine=libaio
00:26:07.412 direct=1
00:26:07.412 bs=262144
00:26:07.412 iodepth=64
00:26:07.412 norandommap=1
00:26:07.412 numjobs=1
00:26:07.412
00:26:07.412 [job0]
00:26:07.412 filename=/dev/nvme0n1
00:26:07.412 [job1]
00:26:07.412 filename=/dev/nvme10n1
00:26:07.412 [job2]
00:26:07.412 filename=/dev/nvme1n1
00:26:07.412 [job3]
00:26:07.412 filename=/dev/nvme2n1
00:26:07.412 [job4]
00:26:07.412 filename=/dev/nvme3n1
00:26:07.412 [job5]
00:26:07.412 filename=/dev/nvme4n1
00:26:07.412 [job6]
00:26:07.412 filename=/dev/nvme5n1
00:26:07.412 [job7]
00:26:07.412 filename=/dev/nvme6n1
00:26:07.412 [job8]
00:26:07.412 filename=/dev/nvme7n1
00:26:07.412 [job9]
00:26:07.412 filename=/dev/nvme8n1
00:26:07.412 [job10]
00:26:07.412 filename=/dev/nvme9n1
00:26:07.671 Could not set queue depth (nvme0n1)
00:26:07.671 Could not set queue depth (nvme10n1)
00:26:07.671 Could not set queue depth (nvme1n1)
00:26:07.671 Could not set queue depth (nvme2n1)
00:26:07.671 Could not set queue depth (nvme3n1)
00:26:07.671 Could not set queue depth (nvme4n1)
00:26:07.671 Could not set queue depth (nvme5n1)
00:26:07.671 Could not set queue depth (nvme6n1)
00:26:07.671 Could not set queue depth (nvme7n1)
00:26:07.671 Could not set queue depth (nvme8n1)
00:26:07.671 Could not set queue depth (nvme9n1)
00:26:07.931 job0: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job1: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job2: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job3: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job4: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job5: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job6: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job7: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job8: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job9: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 job10: (g=0): rw=read, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64
00:26:07.931 fio-3.35
00:26:07.931 Starting 11 threads
00:26:20.147
00:26:20.147 job0: (groupid=0, jobs=1): err= 0: pid=390671: Sun Dec 15 05:26:31 2024
00:26:20.147 read: IOPS=200, BW=50.1MiB/s (52.6MB/s)(507MiB/10114msec)
00:26:20.147 slat (usec): min=15, max=459178, avg=4322.66, stdev=18325.73
00:26:20.147 clat (usec): min=1669, max=951487, avg=314487.14, stdev=132890.73
00:26:20.147 lat (usec): min=1721, max=951546, avg=318809.80, stdev=133859.19
00:26:20.147 clat percentiles (msec):
00:26:20.147 | 1.00th=[ 3], 5.00th=[ 132], 10.00th=[ 174], 20.00th=[ 232],
00:26:20.147 | 30.00th=[ 259], 40.00th=[ 284], 50.00th=[ 305], 60.00th=[ 330],
00:26:20.147 | 70.00th=[ 355], 80.00th=[ 388], 90.00th=[ 447], 95.00th=[ 498],
00:26:20.147 | 99.00th=[ 793], 99.50th=[ 902], 99.90th=[ 919], 99.95th=[ 919],
00:26:20.147 | 99.99th=[ 953]
00:26:20.147 bw ( KiB/s): min= 9216, max=85504, per=5.21%, avg=50278.40, stdev=15074.95, samples=20
00:26:20.147 iops : min= 36, max= 334, avg=196.40, stdev=58.89, samples=20
00:26:20.147 lat (msec) : 2=0.05%, 4=1.13%, 10=0.59%, 20=0.49%, 50=1.18%
00:26:20.147 lat (msec) : 100=0.54%, 250=20.66%, 500=70.66%, 750=2.47%, 1000=2.22%
00:26:20.147 cpu : usr=0.10%, sys=0.85%, ctx=325, majf=0, minf=3722
00:26:20.147 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.6%, >=64=96.9%
00:26:20.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:20.147 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:20.147 issued rwts: total=2028,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:20.147 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:20.147 job1: (groupid=0, jobs=1): err= 0: pid=390673: Sun Dec 15 05:26:31 2024
00:26:20.147 read: IOPS=214, BW=53.7MiB/s (56.3MB/s)(543MiB/10110msec)
00:26:20.147 slat (usec): min=14, max=496284, avg=4117.74, stdev=19003.00
00:26:20.147 clat (msec): min=9, max=907, avg=293.69, stdev=173.14
00:26:20.147 lat (msec): min=10, max=1094, avg=297.80, stdev=174.59
00:26:20.147 clat percentiles (msec):
00:26:20.147 | 1.00th=[ 13], 5.00th=[ 43], 10.00th=[ 115], 20.00th=[ 174],
00:26:20.147 | 30.00th=[ 226], 40.00th=[ 253], 50.00th=[ 271], 60.00th=[ 296],
00:26:20.147 | 70.00th=[ 321], 80.00th=[ 359], 90.00th=[ 498], 95.00th=[ 718],
00:26:20.147 | 99.00th=[ 877], 99.50th=[ 877], 99.90th=[ 911], 99.95th=[ 911],
00:26:20.147 | 99.99th=[ 911]
00:26:20.147 bw ( KiB/s): min=16896, max=100662, per=5.59%, avg=53903.50, stdev=22764.99, samples=20
00:26:20.147 iops : min= 66, max= 393, avg=210.55, stdev=88.90, samples=20
00:26:20.147 lat (msec) : 10=0.05%, 20=2.58%, 50=4.15%, 100=2.35%, 250=29.72%
00:26:20.147 lat (msec) : 500=51.29%, 750=5.90%, 1000=3.96%
00:26:20.147 cpu : usr=0.06%, sys=0.95%, ctx=274, majf=0, minf=4097
00:26:20.147 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.7%, 32=1.5%, >=64=97.1%
00:26:20.147 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:20.147 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:20.147 issued rwts: total=2170,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:20.148 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:20.148 job2: (groupid=0, jobs=1): err= 0: pid=390674: Sun Dec 15 05:26:31 2024
00:26:20.148 read: IOPS=170, BW=42.6MiB/s (44.7MB/s)(432MiB/10130msec)
00:26:20.148 slat (usec): min=17, max=480385, avg=3771.16, stdev=25934.27
00:26:20.148 clat (msec): min=15, max=1425, avg=370.90, stdev=347.29
00:26:20.148 lat (msec): min=15, max=1454, avg=374.67, stdev=352.03
00:26:20.148 clat percentiles (msec):
00:26:20.148 | 1.00th=[ 21], 5.00th=[ 37], 10.00th=[ 54], 20.00th=[ 95],
00:26:20.148 | 30.00th=[ 107], 40.00th=[ 123], 50.00th=[ 205], 60.00th=[ 309],
00:26:20.148 | 70.00th=[ 600], 80.00th=[ 726], 90.00th=[ 944], 95.00th=[ 1083],
00:26:20.148 | 99.00th=[ 1217], 99.50th=[ 1267], 99.90th=[ 1418], 99.95th=[ 1418],
00:26:20.148 | 99.99th=[ 1418]
00:26:20.148 bw ( KiB/s): min= 6144, max=151040, per=4.42%, avg=42624.00, stdev=39834.60, samples=20
00:26:20.148 iops : min= 24, max= 590, avg=166.50, stdev=155.60, samples=20
00:26:20.148 lat (msec) : 20=0.41%, 50=8.39%, 100=14.29%, 250=32.18%, 500=12.21%
00:26:20.148 lat (msec) : 750=14.00%, 1000=10.53%, 2000=7.99%
00:26:20.148 cpu : usr=0.06%, sys=0.62%, ctx=413, majf=0, minf=4097
00:26:20.148 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.5%, 16=0.9%, 32=1.9%, >=64=96.4%
00:26:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:20.148 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:20.148 issued rwts: total=1728,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:20.148 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:20.148 job3: (groupid=0, jobs=1): err= 0: pid=390675: Sun Dec 15 05:26:31 2024
00:26:20.148 read: IOPS=236, BW=59.2MiB/s (62.1MB/s)(598MiB/10109msec)
00:26:20.148 slat (usec): min=16, max=224838, avg=2446.65, stdev=13890.84
00:26:20.148 clat (msec): min=2, max=1047, avg=267.55, stdev=253.56
00:26:20.148 lat (msec): min=2, max=1047, avg=270.00, stdev=256.30
00:26:20.148 clat percentiles (msec):
00:26:20.148 | 1.00th=[ 17], 5.00th=[ 27], 10.00th=[ 40], 20.00th=[ 66],
00:26:20.148 | 30.00th=[ 103], 40.00th=[ 124], 50.00th=[ 140], 60.00th=[ 199],
00:26:20.148 | 70.00th=[ 342], 80.00th=[ 523], 90.00th=[ 693], 95.00th=[ 768],
00:26:20.148 | 99.00th=[ 1011], 99.50th=[ 1020], 99.90th=[ 1045], 99.95th=[ 1045],
00:26:20.148 | 99.99th=[ 1045]
00:26:20.148 bw ( KiB/s): min=16384, max=138752, per=6.18%, avg=59648.00, stdev=43294.64, samples=20
00:26:20.148 iops : min= 64, max= 542, avg=233.00, stdev=169.12, samples=20
00:26:20.148 lat (msec) : 4=0.13%, 10=0.29%, 20=2.42%, 50=9.32%, 100=17.05%
00:26:20.148 lat (msec) : 250=34.73%, 500=15.34%, 750=14.29%, 1000=5.14%, 2000=1.30%
00:26:20.148 cpu : usr=0.07%, sys=0.90%, ctx=713, majf=0, minf=4097
00:26:20.148 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.7%, 32=1.3%, >=64=97.4%
00:26:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:20.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:20.148 issued rwts: total=2393,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:20.148 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:20.148 job4: (groupid=0, jobs=1): err= 0: pid=390677: Sun Dec 15 05:26:31 2024
00:26:20.148 read: IOPS=1270, BW=318MiB/s (333MB/s)(3181MiB/10019msec)
00:26:20.148 slat (usec): min=13, max=78684, avg=782.36, stdev=2827.55
00:26:20.148 clat (msec): min=15, max=346, avg=49.53, stdev=36.54
00:26:20.148 lat (msec): min=20, max=346, avg=50.31, stdev=37.09
00:26:20.148 clat percentiles (msec):
00:26:20.148 | 1.00th=[ 27], 5.00th=[ 30], 10.00th=[ 31], 20.00th=[ 32],
00:26:20.148 | 30.00th=[ 33], 40.00th=[ 34], 50.00th=[ 35], 60.00th=[ 37],
00:26:20.148 | 70.00th=[ 40], 80.00th=[ 64], 90.00th=[ 90], 95.00th=[ 105],
00:26:20.148 | 99.00th=[ 226], 99.50th=[ 266], 99.90th=[ 292], 99.95th=[ 292],
00:26:20.148 | 99.99th=[ 305]
00:26:20.148 bw ( KiB/s): min=65154, max=497664, per=33.59%, avg=324153.70, stdev=149864.53, samples=20
00:26:20.148 iops : min= 254, max= 1944, avg=1266.20, stdev=585.45, samples=20
00:26:20.148 lat (msec) : 20=0.02%, 50=76.08%, 100=18.04%, 250=5.26%, 500=0.61%
00:26:20.148 cpu : usr=0.39%, sys=4.84%, ctx=1630, majf=0, minf=4097
00:26:20.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5%
00:26:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:20.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:20.148 issued rwts: total=12725,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:20.148 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:20.148 job5: (groupid=0, jobs=1): err= 0: pid=390679: Sun Dec 15 05:26:31 2024
00:26:20.148 read: IOPS=283, BW=70.9MiB/s (74.3MB/s)(717MiB/10121msec)
00:26:20.148 slat (usec): min=17, max=250022, avg=1969.76, stdev=13466.52
00:26:20.148 clat (usec): min=1253, max=1410.8k, avg=223496.97, stdev=312874.00
00:26:20.148 lat (usec): min=1398, max=1410.9k, avg=225466.73, stdev=316330.63
00:26:20.148 clat percentiles (msec):
00:26:20.148 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 16], 20.00th=[ 32],
00:26:20.148 | 30.00th=[ 45], 40.00th=[ 66], 50.00th=[ 72], 60.00th=[ 86],
00:26:20.148 | 70.00th=[ 129], 80.00th=[ 426], 90.00th=[ 751], 95.00th=[ 1011],
00:26:20.148 | 99.00th=[ 1167], 99.50th=[ 1200], 99.90th=[ 1318], 99.95th=[ 1418],
00:26:20.148 | 99.99th=[ 1418]
00:26:20.148 bw ( KiB/s): min=10752, max=250880, per=7.44%, avg=71811.60, stdev=81876.05, samples=20
00:26:20.148 iops : min= 42, max= 980, avg=280.50, stdev=319.83, samples=20
00:26:20.148 lat (msec) : 2=0.31%, 4=1.15%, 10=3.52%, 20=9.34%, 50=18.37%
00:26:20.148 lat (msec) : 100=30.08%, 250=13.45%, 500=5.02%, 750=8.68%, 1000=4.98%
00:26:20.148 lat (msec) : 2000=5.09%
00:26:20.148 cpu : usr=0.12%, sys=1.06%, ctx=1728, majf=0, minf=4097
00:26:20.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8%
00:26:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:20.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:20.148 issued rwts: total=2869,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:20.148 latency : target=0, window=0, percentile=100.00%, depth=64
00:26:20.148 job6: (groupid=0, jobs=1): err= 0: pid=390680: Sun Dec 15 05:26:31 2024
00:26:20.148 read: IOPS=186, BW=46.7MiB/s (49.0MB/s)(472MiB/10109msec)
00:26:20.148 slat (usec): min=19, max=354853, avg=5339.50, stdev=22333.99
00:26:20.148 clat (msec): min=13, max=961, avg=336.75, stdev=155.02
00:26:20.148 lat (msec): min=19, max=994, avg=342.09, stdev=157.17
00:26:20.148 clat percentiles (msec):
00:26:20.148 | 1.00th=[ 113], 5.00th=[ 133], 10.00th=[ 148], 20.00th=[ 215],
00:26:20.148 | 30.00th=[ 257], 40.00th=[ 292], 50.00th=[ 321], 60.00th=[ 342],
00:26:20.148 | 70.00th=[ 376], 80.00th=[ 426], 90.00th=[ 523], 95.00th=[ 651],
00:26:20.148 | 99.00th=[ 877], 99.50th=[ 902], 99.90th=[ 961], 99.95th=[ 961],
00:26:20.148 | 99.99th=[ 961]
00:26:20.148 bw ( KiB/s): min= 9728, max=94208, per=4.84%, avg=46720.00, stdev=20506.85, samples=20
00:26:20.148 iops : min= 38, max= 368, avg=182.50, stdev=80.10, samples=20
00:26:20.148 lat (msec) : 20=0.11%, 100=0.79%, 250=27.26%, 500=60.56%, 750=7.78%
00:26:20.148 lat (msec) : 1000=3.49%
00:26:20.148 cpu : usr=0.09%, sys=0.81%, ctx=241, majf=0, minf=4097
00:26:20.148 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.7%, >=64=96.7%
00:26:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:20.148 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
00:26:20.148 issued rwts: total=1889,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:20.148 latency : target=0, window=0,
percentile=100.00%, depth=64 00:26:20.148 job7: (groupid=0, jobs=1): err= 0: pid=390681: Sun Dec 15 05:26:31 2024 00:26:20.148 read: IOPS=767, BW=192MiB/s (201MB/s)(1924MiB/10028msec) 00:26:20.148 slat (usec): min=16, max=170979, avg=1284.06, stdev=6790.45 00:26:20.148 clat (msec): min=13, max=843, avg=82.03, stdev=137.23 00:26:20.148 lat (msec): min=13, max=843, avg=83.31, stdev=139.40 00:26:20.148 clat percentiles (msec): 00:26:20.148 | 1.00th=[ 28], 5.00th=[ 31], 10.00th=[ 32], 20.00th=[ 34], 00:26:20.148 | 30.00th=[ 35], 40.00th=[ 36], 50.00th=[ 37], 60.00th=[ 39], 00:26:20.148 | 70.00th=[ 41], 80.00th=[ 53], 90.00th=[ 167], 95.00th=[ 342], 00:26:20.148 | 99.00th=[ 726], 99.50th=[ 751], 99.90th=[ 793], 99.95th=[ 802], 00:26:20.148 | 99.99th=[ 844] 00:26:20.148 bw ( KiB/s): min=18944, max=462336, per=20.25%, avg=195379.20, stdev=190102.58, samples=20 00:26:20.148 iops : min= 74, max= 1806, avg=763.20, stdev=742.59, samples=20 00:26:20.148 lat (msec) : 20=0.05%, 50=78.78%, 100=5.02%, 250=9.53%, 500=2.31% 00:26:20.148 lat (msec) : 750=3.76%, 1000=0.56% 00:26:20.148 cpu : usr=0.29%, sys=2.93%, ctx=1069, majf=0, minf=4097 00:26:20.148 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:26:20.148 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.148 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.148 issued rwts: total=7695,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.148 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.148 job8: (groupid=0, jobs=1): err= 0: pid=390683: Sun Dec 15 05:26:31 2024 00:26:20.148 read: IOPS=139, BW=34.8MiB/s (36.5MB/s)(352MiB/10127msec) 00:26:20.148 slat (usec): min=15, max=260610, avg=6069.46, stdev=22148.99 00:26:20.148 clat (usec): min=1526, max=1403.5k, avg=453288.49, stdev=333656.85 00:26:20.148 lat (usec): min=1990, max=1403.6k, avg=459357.95, stdev=338410.71 00:26:20.148 clat percentiles (msec): 00:26:20.148 | 1.00th=[ 6], 
5.00th=[ 50], 10.00th=[ 148], 20.00th=[ 180], 00:26:20.148 | 30.00th=[ 201], 40.00th=[ 230], 50.00th=[ 292], 60.00th=[ 510], 00:26:20.148 | 70.00th=[ 659], 80.00th=[ 743], 90.00th=[ 1020], 95.00th=[ 1083], 00:26:20.149 | 99.00th=[ 1217], 99.50th=[ 1250], 99.90th=[ 1401], 99.95th=[ 1401], 00:26:20.149 | 99.99th=[ 1401] 00:26:20.149 bw ( KiB/s): min=11776, max=91136, per=3.57%, avg=34432.00, stdev=25493.54, samples=20 00:26:20.149 iops : min= 46, max= 356, avg=134.50, stdev=99.58, samples=20 00:26:20.149 lat (msec) : 2=0.14%, 4=0.21%, 10=1.28%, 20=0.50%, 50=2.91% 00:26:20.149 lat (msec) : 100=0.28%, 250=40.38%, 500=14.05%, 750=20.37%, 1000=9.30% 00:26:20.149 lat (msec) : 2000=10.57% 00:26:20.149 cpu : usr=0.05%, sys=0.51%, ctx=360, majf=0, minf=4097 00:26:20.149 IO depths : 1=0.1%, 2=0.1%, 4=0.3%, 8=0.6%, 16=1.1%, 32=2.3%, >=64=95.5% 00:26:20.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.149 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.149 issued rwts: total=1409,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.149 job9: (groupid=0, jobs=1): err= 0: pid=390684: Sun Dec 15 05:26:31 2024 00:26:20.149 read: IOPS=206, BW=51.7MiB/s (54.2MB/s)(523MiB/10106msec) 00:26:20.149 slat (usec): min=18, max=296493, avg=3876.08, stdev=17974.42 00:26:20.149 clat (msec): min=25, max=971, avg=305.30, stdev=177.15 00:26:20.149 lat (msec): min=25, max=971, avg=309.18, stdev=178.68 00:26:20.149 clat percentiles (msec): 00:26:20.149 | 1.00th=[ 37], 5.00th=[ 65], 10.00th=[ 82], 20.00th=[ 186], 00:26:20.149 | 30.00th=[ 222], 40.00th=[ 255], 50.00th=[ 279], 60.00th=[ 300], 00:26:20.149 | 70.00th=[ 330], 80.00th=[ 384], 90.00th=[ 625], 95.00th=[ 726], 00:26:20.149 | 99.00th=[ 810], 99.50th=[ 810], 99.90th=[ 852], 99.95th=[ 969], 00:26:20.149 | 99.99th=[ 969] 00:26:20.149 bw ( KiB/s): min=15872, max=107520, per=5.37%, avg=51865.60, stdev=23476.96, 
samples=20 00:26:20.149 iops : min= 62, max= 420, avg=202.60, stdev=91.71, samples=20 00:26:20.149 lat (msec) : 50=3.11%, 100=8.76%, 250=26.70%, 500=48.90%, 750=9.67% 00:26:20.149 lat (msec) : 1000=2.87% 00:26:20.149 cpu : usr=0.06%, sys=0.88%, ctx=298, majf=0, minf=4097 00:26:20.149 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.4%, 16=0.8%, 32=1.5%, >=64=97.0% 00:26:20.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.149 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.149 issued rwts: total=2090,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.149 job10: (groupid=0, jobs=1): err= 0: pid=390685: Sun Dec 15 05:26:31 2024 00:26:20.149 read: IOPS=117, BW=29.3MiB/s (30.7MB/s)(297MiB/10122msec) 00:26:20.149 slat (usec): min=14, max=675962, avg=4537.02, stdev=30573.23 00:26:20.149 clat (usec): min=818, max=1370.6k, avg=540522.23, stdev=326559.17 00:26:20.149 lat (usec): min=841, max=1801.7k, avg=545059.25, stdev=330896.92 00:26:20.149 clat percentiles (usec): 00:26:20.149 | 1.00th=[ 1401], 5.00th=[ 99091], 10.00th=[ 162530], 00:26:20.149 | 20.00th=[ 229639], 30.00th=[ 295699], 40.00th=[ 367002], 00:26:20.149 | 50.00th=[ 501220], 60.00th=[ 633340], 70.00th=[ 717226], 00:26:20.149 | 80.00th=[ 817890], 90.00th=[1044382], 95.00th=[1115685], 00:26:20.149 | 99.00th=[1216349], 99.50th=[1266680], 99.90th=[1367344], 00:26:20.149 | 99.95th=[1367344], 99.99th=[1367344] 00:26:20.149 bw ( KiB/s): min= 4096, max=78336, per=2.98%, avg=28750.80, stdev=18988.69, samples=20 00:26:20.149 iops : min= 16, max= 306, avg=112.30, stdev=74.18, samples=20 00:26:20.149 lat (usec) : 1000=0.25% 00:26:20.149 lat (msec) : 2=1.10%, 4=0.08%, 100=3.62%, 250=17.52%, 500=27.89% 00:26:20.149 lat (msec) : 750=25.36%, 1000=9.44%, 2000=14.74% 00:26:20.149 cpu : usr=0.04%, sys=0.54%, ctx=336, majf=0, minf=4097 00:26:20.149 IO depths : 1=0.1%, 2=0.2%, 4=0.3%, 8=0.7%, 16=1.3%, 
32=2.7%, >=64=94.7% 00:26:20.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:20.149 complete : 0=0.0%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:20.149 issued rwts: total=1187,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:20.149 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:20.149 00:26:20.149 Run status group 0 (all jobs): 00:26:20.149 READ: bw=942MiB/s (988MB/s), 29.3MiB/s-318MiB/s (30.7MB/s-333MB/s), io=9546MiB (10.0GB), run=10019-10130msec 00:26:20.149 00:26:20.149 Disk stats (read/write): 00:26:20.149 nvme0n1: ios=3918/0, merge=0/0, ticks=1232947/0, in_queue=1232947, util=97.29% 00:26:20.149 nvme10n1: ios=4169/0, merge=0/0, ticks=1230228/0, in_queue=1230228, util=97.43% 00:26:20.149 nvme1n1: ios=3301/0, merge=0/0, ticks=1152673/0, in_queue=1152673, util=97.76% 00:26:20.149 nvme2n1: ios=4591/0, merge=0/0, ticks=1225945/0, in_queue=1225945, util=97.90% 00:26:20.149 nvme3n1: ios=25092/0, merge=0/0, ticks=1242506/0, in_queue=1242506, util=97.95% 00:26:20.149 nvme4n1: ios=5616/0, merge=0/0, ticks=1168839/0, in_queue=1168839, util=98.24% 00:26:20.149 nvme5n1: ios=3594/0, merge=0/0, ticks=1216580/0, in_queue=1216580, util=98.42% 00:26:20.149 nvme6n1: ios=15127/0, merge=0/0, ticks=1237376/0, in_queue=1237376, util=98.53% 00:26:20.149 nvme7n1: ios=2696/0, merge=0/0, ticks=1162110/0, in_queue=1162110, util=98.97% 00:26:20.149 nvme8n1: ios=4015/0, merge=0/0, ticks=1229116/0, in_queue=1229116, util=99.10% 00:26:20.149 nvme9n1: ios=2249/0, merge=0/0, ticks=1170606/0, in_queue=1170606, util=99.22% 00:26:20.149 05:26:32 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 262144 -d 64 -t randwrite -r 10 00:26:20.149 [global] 00:26:20.149 thread=1 00:26:20.149 invalidate=1 00:26:20.149 rw=randwrite 00:26:20.149 time_based=1 00:26:20.149 runtime=10 00:26:20.149 ioengine=libaio 00:26:20.149 direct=1 
00:26:20.149 bs=262144 00:26:20.149 iodepth=64 00:26:20.149 norandommap=1 00:26:20.149 numjobs=1 00:26:20.149 00:26:20.149 [job0] 00:26:20.149 filename=/dev/nvme0n1 00:26:20.149 [job1] 00:26:20.149 filename=/dev/nvme10n1 00:26:20.149 [job2] 00:26:20.149 filename=/dev/nvme1n1 00:26:20.149 [job3] 00:26:20.149 filename=/dev/nvme2n1 00:26:20.149 [job4] 00:26:20.149 filename=/dev/nvme3n1 00:26:20.149 [job5] 00:26:20.149 filename=/dev/nvme4n1 00:26:20.149 [job6] 00:26:20.149 filename=/dev/nvme5n1 00:26:20.149 [job7] 00:26:20.149 filename=/dev/nvme6n1 00:26:20.149 [job8] 00:26:20.149 filename=/dev/nvme7n1 00:26:20.149 [job9] 00:26:20.149 filename=/dev/nvme8n1 00:26:20.149 [job10] 00:26:20.149 filename=/dev/nvme9n1 00:26:20.149 Could not set queue depth (nvme0n1) 00:26:20.149 Could not set queue depth (nvme10n1) 00:26:20.149 Could not set queue depth (nvme1n1) 00:26:20.149 Could not set queue depth (nvme2n1) 00:26:20.149 Could not set queue depth (nvme3n1) 00:26:20.149 Could not set queue depth (nvme4n1) 00:26:20.149 Could not set queue depth (nvme5n1) 00:26:20.149 Could not set queue depth (nvme6n1) 00:26:20.149 Could not set queue depth (nvme7n1) 00:26:20.149 Could not set queue depth (nvme8n1) 00:26:20.149 Could not set queue depth (nvme9n1) 00:26:20.149 job0: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job1: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job2: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job3: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job4: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job5: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, 
ioengine=libaio, iodepth=64 00:26:20.149 job6: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job7: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job8: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job9: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 job10: (g=0): rw=randwrite, bs=(R) 256KiB-256KiB, (W) 256KiB-256KiB, (T) 256KiB-256KiB, ioengine=libaio, iodepth=64 00:26:20.149 fio-3.35 00:26:20.149 Starting 11 threads 00:26:30.136 00:26:30.136 job0: (groupid=0, jobs=1): err= 0: pid=391712: Sun Dec 15 05:26:43 2024 00:26:30.136 write: IOPS=434, BW=109MiB/s (114MB/s)(1096MiB/10082msec); 0 zone resets 00:26:30.136 slat (usec): min=21, max=95525, avg=1588.99, stdev=4836.96 00:26:30.136 clat (usec): min=616, max=502834, avg=145484.95, stdev=113330.54 00:26:30.136 lat (usec): min=655, max=502885, avg=147073.94, stdev=114618.43 00:26:30.136 clat percentiles (msec): 00:26:30.136 | 1.00th=[ 3], 5.00th=[ 8], 10.00th=[ 15], 20.00th=[ 45], 00:26:30.136 | 30.00th=[ 74], 40.00th=[ 93], 50.00th=[ 117], 60.00th=[ 150], 00:26:30.136 | 70.00th=[ 169], 80.00th=[ 245], 90.00th=[ 342], 95.00th=[ 376], 00:26:30.136 | 99.00th=[ 439], 99.50th=[ 451], 99.90th=[ 477], 99.95th=[ 493], 00:26:30.136 | 99.99th=[ 502] 00:26:30.136 bw ( KiB/s): min=43520, max=272896, per=11.32%, avg=110625.15, stdev=60499.63, samples=20 00:26:30.136 iops : min= 170, max= 1066, avg=432.10, stdev=236.34, samples=20 00:26:30.136 lat (usec) : 750=0.14%, 1000=0.11% 00:26:30.136 lat (msec) : 2=0.71%, 4=1.64%, 10=4.54%, 20=5.54%, 50=8.35% 00:26:30.136 lat (msec) : 100=22.15%, 250=37.39%, 500=19.39%, 750=0.05% 00:26:30.136 cpu : usr=1.00%, sys=1.36%, ctx=2450, majf=0, minf=1 00:26:30.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:30.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.136 issued rwts: total=0,4384,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.136 job1: (groupid=0, jobs=1): err= 0: pid=391725: Sun Dec 15 05:26:43 2024 00:26:30.136 write: IOPS=290, BW=72.6MiB/s (76.1MB/s)(738MiB/10164msec); 0 zone resets 00:26:30.136 slat (usec): min=21, max=51915, avg=3027.85, stdev=6638.98 00:26:30.136 clat (msec): min=3, max=524, avg=217.22, stdev=107.67 00:26:30.136 lat (msec): min=3, max=524, avg=220.25, stdev=108.87 00:26:30.136 clat percentiles (msec): 00:26:30.136 | 1.00th=[ 41], 5.00th=[ 67], 10.00th=[ 88], 20.00th=[ 101], 00:26:30.136 | 30.00th=[ 153], 40.00th=[ 192], 50.00th=[ 215], 60.00th=[ 232], 00:26:30.136 | 70.00th=[ 262], 80.00th=[ 305], 90.00th=[ 372], 95.00th=[ 418], 00:26:30.136 | 99.00th=[ 493], 99.50th=[ 506], 99.90th=[ 527], 99.95th=[ 527], 00:26:30.136 | 99.99th=[ 527] 00:26:30.136 bw ( KiB/s): min=32768, max=138240, per=7.57%, avg=73965.25, stdev=31918.12, samples=20 00:26:30.136 iops : min= 128, max= 540, avg=288.90, stdev=124.68, samples=20 00:26:30.136 lat (msec) : 4=0.07%, 20=0.03%, 50=1.59%, 100=18.36%, 250=46.75% 00:26:30.136 lat (msec) : 500=32.49%, 750=0.71% 00:26:30.136 cpu : usr=0.70%, sys=0.75%, ctx=1039, majf=0, minf=2 00:26:30.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:30.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.136 issued rwts: total=0,2952,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.136 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.136 job2: (groupid=0, jobs=1): err= 0: pid=391726: Sun Dec 15 05:26:43 2024 00:26:30.136 write: IOPS=435, 
BW=109MiB/s (114MB/s)(1105MiB/10156msec); 0 zone resets 00:26:30.136 slat (usec): min=22, max=98942, avg=1511.89, stdev=5355.62 00:26:30.136 clat (usec): min=610, max=497540, avg=145469.12, stdev=129892.79 00:26:30.136 lat (usec): min=647, max=497639, avg=146981.01, stdev=131409.70 00:26:30.136 clat percentiles (msec): 00:26:30.136 | 1.00th=[ 3], 5.00th=[ 5], 10.00th=[ 13], 20.00th=[ 35], 00:26:30.136 | 30.00th=[ 53], 40.00th=[ 67], 50.00th=[ 89], 60.00th=[ 146], 00:26:30.136 | 70.00th=[ 211], 80.00th=[ 271], 90.00th=[ 351], 95.00th=[ 397], 00:26:30.136 | 99.00th=[ 468], 99.50th=[ 481], 99.90th=[ 498], 99.95th=[ 498], 00:26:30.136 | 99.99th=[ 498] 00:26:30.136 bw ( KiB/s): min=34816, max=322560, per=11.42%, avg=111548.80, stdev=79644.55, samples=20 00:26:30.136 iops : min= 136, max= 1260, avg=435.70, stdev=311.12, samples=20 00:26:30.136 lat (usec) : 750=0.09%, 1000=0.09% 00:26:30.136 lat (msec) : 2=0.52%, 4=4.28%, 10=3.76%, 20=4.48%, 50=15.63% 00:26:30.136 lat (msec) : 100=24.03%, 250=24.43%, 500=22.69% 00:26:30.136 cpu : usr=0.93%, sys=1.43%, ctx=2870, majf=0, minf=2 00:26:30.136 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:30.136 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.136 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,4420,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job3: (groupid=0, jobs=1): err= 0: pid=391727: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=291, BW=73.0MiB/s (76.5MB/s)(735MiB/10066msec); 0 zone resets 00:26:30.137 slat (usec): min=26, max=127278, avg=2473.54, stdev=6751.09 00:26:30.137 clat (usec): min=1079, max=502739, avg=216644.22, stdev=113952.75 00:26:30.137 lat (usec): min=1712, max=502786, avg=219117.76, stdev=115216.47 00:26:30.137 clat percentiles (msec): 00:26:30.137 | 1.00th=[ 5], 5.00th=[ 50], 10.00th=[ 69], 20.00th=[ 
109], 00:26:30.137 | 30.00th=[ 148], 40.00th=[ 167], 50.00th=[ 199], 60.00th=[ 247], 00:26:30.137 | 70.00th=[ 296], 80.00th=[ 330], 90.00th=[ 372], 95.00th=[ 405], 00:26:30.137 | 99.00th=[ 460], 99.50th=[ 481], 99.90th=[ 498], 99.95th=[ 502], 00:26:30.137 | 99.99th=[ 502] 00:26:30.137 bw ( KiB/s): min=37888, max=123904, per=7.53%, avg=73605.65, stdev=27484.17, samples=20 00:26:30.137 iops : min= 148, max= 484, avg=287.50, stdev=107.37, samples=20 00:26:30.137 lat (msec) : 2=0.07%, 4=0.58%, 10=1.43%, 20=0.17%, 50=2.93% 00:26:30.137 lat (msec) : 100=11.88%, 250=43.46%, 500=39.41%, 750=0.07% 00:26:30.137 cpu : usr=0.66%, sys=0.94%, ctx=1576, majf=0, minf=1 00:26:30.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.5%, 32=1.1%, >=64=97.9% 00:26:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,2938,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job4: (groupid=0, jobs=1): err= 0: pid=391728: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=397, BW=99.4MiB/s (104MB/s)(1010MiB/10165msec); 0 zone resets 00:26:30.137 slat (usec): min=20, max=158980, avg=1764.23, stdev=5834.35 00:26:30.137 clat (msec): min=4, max=565, avg=159.14, stdev=119.87 00:26:30.137 lat (msec): min=4, max=573, avg=160.91, stdev=121.13 00:26:30.137 clat percentiles (msec): 00:26:30.137 | 1.00th=[ 8], 5.00th=[ 16], 10.00th=[ 26], 20.00th=[ 44], 00:26:30.137 | 30.00th=[ 63], 40.00th=[ 96], 50.00th=[ 138], 60.00th=[ 178], 00:26:30.137 | 70.00th=[ 218], 80.00th=[ 255], 90.00th=[ 338], 95.00th=[ 401], 00:26:30.137 | 99.00th=[ 456], 99.50th=[ 493], 99.90th=[ 542], 99.95th=[ 558], 00:26:30.137 | 99.99th=[ 567] 00:26:30.137 bw ( KiB/s): min=40960, max=304128, per=10.42%, avg=101818.35, stdev=70492.34, samples=20 00:26:30.137 iops : min= 160, max= 1188, avg=397.70, stdev=275.37, samples=20 
00:26:30.137 lat (msec) : 10=2.75%, 20=5.02%, 50=18.96%, 100=15.37%, 250=37.07% 00:26:30.137 lat (msec) : 500=20.49%, 750=0.35% 00:26:30.137 cpu : usr=0.91%, sys=1.23%, ctx=2169, majf=0, minf=1 00:26:30.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:26:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,4041,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job5: (groupid=0, jobs=1): err= 0: pid=391729: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=338, BW=84.7MiB/s (88.8MB/s)(861MiB/10165msec); 0 zone resets 00:26:30.137 slat (usec): min=19, max=55535, avg=1983.94, stdev=5623.39 00:26:30.137 clat (usec): min=1252, max=488998, avg=186809.01, stdev=129769.87 00:26:30.137 lat (usec): min=1334, max=495924, avg=188792.95, stdev=131065.57 00:26:30.137 clat percentiles (msec): 00:26:30.137 | 1.00th=[ 4], 5.00th=[ 15], 10.00th=[ 33], 20.00th=[ 51], 00:26:30.137 | 30.00th=[ 79], 40.00th=[ 129], 50.00th=[ 192], 60.00th=[ 220], 00:26:30.137 | 70.00th=[ 247], 80.00th=[ 305], 90.00th=[ 388], 95.00th=[ 426], 00:26:30.137 | 99.00th=[ 460], 99.50th=[ 468], 99.90th=[ 481], 99.95th=[ 485], 00:26:30.137 | 99.99th=[ 489] 00:26:30.137 bw ( KiB/s): min=36864, max=314484, per=8.86%, avg=86559.40, stdev=58919.78, samples=20 00:26:30.137 iops : min= 144, max= 1228, avg=338.10, stdev=230.06, samples=20 00:26:30.137 lat (msec) : 2=0.12%, 4=0.96%, 10=0.96%, 20=5.52%, 50=11.90% 00:26:30.137 lat (msec) : 100=17.42%, 250=33.54%, 500=29.59% 00:26:30.137 cpu : usr=0.81%, sys=1.08%, ctx=1782, majf=0, minf=1 00:26:30.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=0.9%, >=64=98.2% 00:26:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,3444,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job6: (groupid=0, jobs=1): err= 0: pid=391730: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=284, BW=71.0MiB/s (74.5MB/s)(716MiB/10080msec); 0 zone resets 00:26:30.137 slat (usec): min=26, max=76234, avg=3096.48, stdev=7090.75 00:26:30.137 clat (msec): min=8, max=499, avg=222.15, stdev=111.18 00:26:30.137 lat (msec): min=8, max=499, avg=225.24, stdev=112.65 00:26:30.137 clat percentiles (msec): 00:26:30.137 | 1.00th=[ 22], 5.00th=[ 79], 10.00th=[ 101], 20.00th=[ 134], 00:26:30.137 | 30.00th=[ 155], 40.00th=[ 165], 50.00th=[ 182], 60.00th=[ 232], 00:26:30.137 | 70.00th=[ 275], 80.00th=[ 334], 90.00th=[ 397], 95.00th=[ 426], 00:26:30.137 | 99.00th=[ 485], 99.50th=[ 493], 99.90th=[ 502], 99.95th=[ 502], 00:26:30.137 | 99.99th=[ 502] 00:26:30.137 bw ( KiB/s): min=38912, max=122880, per=7.33%, avg=71660.10, stdev=29960.70, samples=20 00:26:30.137 iops : min= 152, max= 480, avg=279.90, stdev=117.05, samples=20 00:26:30.137 lat (msec) : 10=0.03%, 20=0.91%, 50=1.40%, 100=7.82%, 250=53.62% 00:26:30.137 lat (msec) : 500=36.22% 00:26:30.137 cpu : usr=0.78%, sys=0.83%, ctx=1003, majf=0, minf=1 00:26:30.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.1%, >=64=97.8% 00:26:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,2863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job7: (groupid=0, jobs=1): err= 0: pid=391731: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=448, BW=112MiB/s (118MB/s)(1133MiB/10101msec); 0 zone resets 00:26:30.137 slat (usec): min=23, max=185621, avg=1649.05, stdev=6047.01 00:26:30.137 clat (msec): min=2, max=524, avg=140.93, stdev=91.55 
00:26:30.137 lat (msec): min=2, max=524, avg=142.57, stdev=92.48 00:26:30.137 clat percentiles (msec): 00:26:30.137 | 1.00th=[ 7], 5.00th=[ 23], 10.00th=[ 39], 20.00th=[ 59], 00:26:30.137 | 30.00th=[ 83], 40.00th=[ 100], 50.00th=[ 122], 60.00th=[ 155], 00:26:30.137 | 70.00th=[ 184], 80.00th=[ 218], 90.00th=[ 255], 95.00th=[ 321], 00:26:30.137 | 99.00th=[ 397], 99.50th=[ 426], 99.90th=[ 506], 99.95th=[ 514], 00:26:30.137 | 99.99th=[ 527] 00:26:30.137 bw ( KiB/s): min=51200, max=285696, per=11.70%, avg=114362.65, stdev=54459.50, samples=20 00:26:30.137 iops : min= 200, max= 1116, avg=446.70, stdev=212.75, samples=20 00:26:30.137 lat (msec) : 4=0.04%, 10=2.32%, 20=2.14%, 50=11.81%, 100=23.90% 00:26:30.137 lat (msec) : 250=49.04%, 500=10.64%, 750=0.11% 00:26:30.137 cpu : usr=1.11%, sys=1.30%, ctx=2151, majf=0, minf=1 00:26:30.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:26:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,4531,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job8: (groupid=0, jobs=1): err= 0: pid=391732: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=325, BW=81.3MiB/s (85.2MB/s)(821MiB/10101msec); 0 zone resets 00:26:30.137 slat (usec): min=24, max=84846, avg=2451.30, stdev=6924.48 00:26:30.137 clat (usec): min=1274, max=607830, avg=194341.01, stdev=133257.58 00:26:30.137 lat (usec): min=1328, max=607871, avg=196792.31, stdev=135140.27 00:26:30.137 clat percentiles (msec): 00:26:30.137 | 1.00th=[ 4], 5.00th=[ 11], 10.00th=[ 23], 20.00th=[ 55], 00:26:30.137 | 30.00th=[ 102], 40.00th=[ 157], 50.00th=[ 182], 60.00th=[ 226], 00:26:30.137 | 70.00th=[ 268], 80.00th=[ 309], 90.00th=[ 376], 95.00th=[ 418], 00:26:30.137 | 99.00th=[ 550], 99.50th=[ 584], 99.90th=[ 609], 99.95th=[ 609], 00:26:30.137 | 
99.99th=[ 609] 00:26:30.137 bw ( KiB/s): min=30720, max=313344, per=8.44%, avg=82432.00, stdev=62046.69, samples=20 00:26:30.137 iops : min= 120, max= 1224, avg=322.00, stdev=242.37, samples=20 00:26:30.137 lat (msec) : 2=0.21%, 4=1.28%, 10=3.50%, 20=4.02%, 50=9.05% 00:26:30.137 lat (msec) : 100=11.70%, 250=35.91%, 500=32.10%, 750=2.22% 00:26:30.137 cpu : usr=0.69%, sys=1.15%, ctx=1804, majf=0, minf=1 00:26:30.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,3283,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job9: (groupid=0, jobs=1): err= 0: pid=391733: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=327, BW=81.9MiB/s (85.8MB/s)(825MiB/10082msec); 0 zone resets 00:26:30.137 slat (usec): min=21, max=116409, avg=1906.60, stdev=6331.59 00:26:30.137 clat (msec): min=2, max=529, avg=193.49, stdev=132.90 00:26:30.137 lat (msec): min=2, max=529, avg=195.40, stdev=134.63 00:26:30.137 clat percentiles (msec): 00:26:30.137 | 1.00th=[ 11], 5.00th=[ 28], 10.00th=[ 37], 20.00th=[ 56], 00:26:30.137 | 30.00th=[ 81], 40.00th=[ 112], 50.00th=[ 184], 60.00th=[ 239], 00:26:30.137 | 70.00th=[ 279], 80.00th=[ 321], 90.00th=[ 388], 95.00th=[ 409], 00:26:30.137 | 99.00th=[ 489], 99.50th=[ 510], 99.90th=[ 523], 99.95th=[ 527], 00:26:30.137 | 99.99th=[ 531] 00:26:30.137 bw ( KiB/s): min=34816, max=257024, per=8.48%, avg=82898.75, stdev=49581.35, samples=20 00:26:30.137 iops : min= 136, max= 1004, avg=323.80, stdev=193.69, samples=20 00:26:30.137 lat (msec) : 4=0.12%, 10=0.76%, 20=1.42%, 50=15.45%, 100=19.93% 00:26:30.137 lat (msec) : 250=25.30%, 500=36.29%, 750=0.73% 00:26:30.137 cpu : usr=0.87%, sys=1.08%, ctx=2148, majf=0, minf=1 00:26:30.137 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 
8=0.2%, 16=0.5%, 32=1.0%, >=64=98.1% 00:26:30.137 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.137 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.137 issued rwts: total=0,3301,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.137 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.137 job10: (groupid=0, jobs=1): err= 0: pid=391734: Sun Dec 15 05:26:43 2024 00:26:30.137 write: IOPS=260, BW=65.0MiB/s (68.2MB/s)(661MiB/10163msec); 0 zone resets 00:26:30.137 slat (usec): min=20, max=173810, avg=2950.18, stdev=7683.94 00:26:30.137 clat (msec): min=6, max=573, avg=242.48, stdev=110.83 00:26:30.138 lat (msec): min=6, max=578, avg=245.43, stdev=112.20 00:26:30.138 clat percentiles (msec): 00:26:30.138 | 1.00th=[ 51], 5.00th=[ 85], 10.00th=[ 99], 20.00th=[ 150], 00:26:30.138 | 30.00th=[ 186], 40.00th=[ 209], 50.00th=[ 224], 60.00th=[ 247], 00:26:30.138 | 70.00th=[ 292], 80.00th=[ 338], 90.00th=[ 405], 95.00th=[ 451], 00:26:30.138 | 99.00th=[ 542], 99.50th=[ 558], 99.90th=[ 567], 99.95th=[ 567], 00:26:30.138 | 99.99th=[ 575] 00:26:30.138 bw ( KiB/s): min=30208, max=130560, per=6.76%, avg=66073.60, stdev=24085.19, samples=20 00:26:30.138 iops : min= 118, max= 510, avg=258.10, stdev=94.08, samples=20 00:26:30.138 lat (msec) : 10=0.08%, 20=0.04%, 50=0.87%, 100=10.59%, 250=49.66% 00:26:30.138 lat (msec) : 500=36.57%, 750=2.19% 00:26:30.138 cpu : usr=0.46%, sys=0.99%, ctx=1166, majf=0, minf=1 00:26:30.138 IO depths : 1=0.1%, 2=0.1%, 4=0.2%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.6% 00:26:30.138 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:30.138 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0% 00:26:30.138 issued rwts: total=0,2644,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:30.138 latency : target=0, window=0, percentile=100.00%, depth=64 00:26:30.138 00:26:30.138 Run status group 0 (all jobs): 00:26:30.138 WRITE: bw=954MiB/s (1001MB/s), 
65.0MiB/s-112MiB/s (68.2MB/s-118MB/s), io=9700MiB (10.2GB), run=10066-10165msec 00:26:30.138 00:26:30.138 Disk stats (read/write): 00:26:30.138 nvme0n1: ios=51/8570, merge=0/0, ticks=2126/1221226, in_queue=1223352, util=99.93% 00:26:30.138 nvme10n1: ios=42/5757, merge=0/0, ticks=46/1201147, in_queue=1201193, util=97.60% 00:26:30.138 nvme1n1: ios=0/8658, merge=0/0, ticks=0/1216891, in_queue=1216891, util=97.61% 00:26:30.138 nvme2n1: ios=43/5664, merge=0/0, ticks=280/1219459, in_queue=1219739, util=100.00% 00:26:30.138 nvme3n1: ios=0/7933, merge=0/0, ticks=0/1209592, in_queue=1209592, util=97.85% 00:26:30.138 nvme4n1: ios=0/6740, merge=0/0, ticks=0/1210402, in_queue=1210402, util=98.16% 00:26:30.138 nvme5n1: ios=0/5518, merge=0/0, ticks=0/1213074, in_queue=1213074, util=98.30% 00:26:30.138 nvme6n1: ios=46/8892, merge=0/0, ticks=2568/1190913, in_queue=1193481, util=100.00% 00:26:30.138 nvme7n1: ios=40/6402, merge=0/0, ticks=1381/1206577, in_queue=1207958, util=100.00% 00:26:30.138 nvme8n1: ios=0/6396, merge=0/0, ticks=0/1224386, in_queue=1224386, util=98.92% 00:26:30.138 nvme9n1: ios=42/5132, merge=0/0, ticks=4368/1196838, in_queue=1201206, util=100.00% 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@36 -- # sync 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # seq 1 11 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:26:30.138 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK1 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1223 -- # local i=0 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK1 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK1 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.138 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode2 00:26:30.398 NQN:nqn.2016-06.io.spdk:cnode2 disconnected 1 controller(s) 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK2 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.398 05:26:43 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK2 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK2 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.398 05:26:43 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode3 00:26:30.657 NQN:nqn.2016-06.io.spdk:cnode3 disconnected 1 controller(s) 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK3 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK3 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w 
SPDK3 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.657 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode4 00:26:30.916 NQN:nqn.2016-06.io.spdk:cnode4 disconnected 1 controller(s) 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK4 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK4 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK4 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- 
common/autotest_common.sh@1235 -- # return 0 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:30.916 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:30.917 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:30.917 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:30.917 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode5 00:26:31.176 NQN:nqn.2016-06.io.spdk:cnode5 disconnected 1 controller(s) 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK5 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK5 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK5 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode5 00:26:31.176 
05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.176 05:26:44 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode6 00:26:31.434 NQN:nqn.2016-06.io.spdk:cnode6 disconnected 1 controller(s) 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK6 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK6 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK6 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode6 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 
00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.435 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode7 00:26:31.693 NQN:nqn.2016-06.io.spdk:cnode7 disconnected 1 controller(s) 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK7 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK7 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK7 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode7 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 
-- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.693 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode8 00:26:31.951 NQN:nqn.2016-06.io.spdk:cnode8 disconnected 1 controller(s) 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK8 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK8 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK8 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode8 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode9 00:26:31.951 
NQN:nqn.2016-06.io.spdk:cnode9 disconnected 1 controller(s) 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK9 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK9 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK9 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode9 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:31.951 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode10 00:26:32.210 NQN:nqn.2016-06.io.spdk:cnode10 disconnected 1 controller(s) 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK10 00:26:32.210 05:26:45 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK10 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK10 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode10 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@37 -- # for i in $(seq 1 $NVMF_SUBSYS) 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode11 00:26:32.210 NQN:nqn.2016-06.io.spdk:cnode11 disconnected 1 controller(s) 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@39 -- # waitforserial_disconnect SPDK11 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1223 -- # local i=0 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # lsblk -o 
NAME,SERIAL 00:26:32.210 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1224 -- # grep -q -w SPDK11 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1231 -- # grep -q -w SPDK11 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1235 -- # return 0 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode11 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- # set +x 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@43 -- # rm -f ./local-job0-0-verify.state 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- target/multiconnection.sh@47 -- # nvmftestfini 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@516 -- # nvmfcleanup 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@121 -- # sync 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@124 -- # set +e 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@125 -- # for i in {1..20} 00:26:32.470 
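The trace above repeats the same teardown sequence for cnode1 through cnode11: `nvme disconnect`, a `waitforserial_disconnect SPDKn` poll (which greps `lsblk -o NAME,SERIAL` until the serial disappears), then `rpc_cmd nvmf_delete_subsystem`. A dry-run sketch of that loop, reconstructed from the xtrace (helper names are taken from the log; this version only prints the commands rather than executing them):

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the multiconnection teardown loop from the trace
# (target/multiconnection.sh lines @37-@40). Prints each command instead of
# running it, since the real commands need a live NVMe-oF target.
teardown_dry_run() {
  local i
  for i in $(seq 1 "$NVMF_SUBSYS"); do
    echo "nvme disconnect -n nqn.2016-06.io.spdk:cnode$i"
    echo "waitforserial_disconnect SPDK$i"   # polls lsblk -o NAME,SERIAL
    echo "rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode$i"
  done
}

NVMF_SUBSYS=11   # the log shows seq 1 11
cmds=$(teardown_dry_run)
echo "$cmds"
```

Each subsystem thus takes three steps: initiator-side disconnect, a wait for the kernel block device to vanish, then target-side subsystem deletion via RPC.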
05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:26:32.470 rmmod nvme_tcp 00:26:32.470 rmmod nvme_fabrics 00:26:32.470 rmmod nvme_keyring 00:26:32.470 05:26:45 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@128 -- # set -e 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@129 -- # return 0 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@517 -- # '[' -n 384315 ']' 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@518 -- # killprocess 384315 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@954 -- # '[' -z 384315 ']' 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@958 -- # kill -0 384315 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # uname 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 384315 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@972 -- # echo 'killing process with pid 384315' 00:26:32.470 killing process with pid 384315 00:26:32.470 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@973 -- # kill 384315 00:26:32.470 05:26:46 
nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@978 -- # wait 384315 00:26:33.038 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:26:33.038 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:26:33.038 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:26:33.038 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@297 -- # iptr 00:26:33.038 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-save 00:26:33.039 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:26:33.039 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@791 -- # iptables-restore 00:26:33.039 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:26:33.039 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@302 -- # remove_spdk_ns 00:26:33.039 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:33.039 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:33.039 05:26:46 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:26:34.945 00:26:34.945 real 1m10.603s 00:26:34.945 user 4m16.285s 00:26:34.945 sys 0m16.555s 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_multiconnection -- common/autotest_common.sh@10 -- 
# set +x 00:26:34.945 ************************************ 00:26:34.945 END TEST nvmf_multiconnection 00:26:34.945 ************************************ 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@50 -- # run_test nvmf_initiator_timeout /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:26:34.945 ************************************ 00:26:34.945 START TEST nvmf_initiator_timeout 00:26:34.945 ************************************ 00:26:34.945 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/initiator_timeout.sh --transport=tcp 00:26:35.205 * Looking for test storage... 
00:26:35.205 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lcov --version 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # IFS=.-: 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@336 -- # read -ra ver1 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # IFS=.-: 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@337 -- # read -ra ver2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@338 -- # local 'op=<' 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@340 -- # ver1_l=2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@341 -- # ver2_l=1 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@344 -- # case "$op" in 
00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@345 -- # : 1 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # decimal 1 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=1 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 1 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@365 -- # ver1[v]=1 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # decimal 2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@353 -- # local d=2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@355 -- # echo 2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@366 -- # ver2[v]=2 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@368 -- # return 0 00:26:35.205 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:26:35.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.206 --rc genhtml_branch_coverage=1 00:26:35.206 --rc genhtml_function_coverage=1 00:26:35.206 --rc genhtml_legend=1 00:26:35.206 --rc geninfo_all_blocks=1 00:26:35.206 --rc geninfo_unexecuted_blocks=1 00:26:35.206 00:26:35.206 ' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:26:35.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.206 --rc genhtml_branch_coverage=1 00:26:35.206 --rc genhtml_function_coverage=1 00:26:35.206 --rc genhtml_legend=1 00:26:35.206 --rc geninfo_all_blocks=1 00:26:35.206 --rc geninfo_unexecuted_blocks=1 00:26:35.206 00:26:35.206 ' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:26:35.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.206 --rc genhtml_branch_coverage=1 00:26:35.206 --rc genhtml_function_coverage=1 00:26:35.206 --rc genhtml_legend=1 00:26:35.206 --rc geninfo_all_blocks=1 00:26:35.206 --rc geninfo_unexecuted_blocks=1 00:26:35.206 00:26:35.206 ' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:26:35.206 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:35.206 --rc genhtml_branch_coverage=1 00:26:35.206 --rc genhtml_function_coverage=1 00:26:35.206 --rc genhtml_legend=1 00:26:35.206 --rc geninfo_all_blocks=1 00:26:35.206 --rc geninfo_unexecuted_blocks=1 00:26:35.206 00:26:35.206 ' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:35.206 
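The `lt 1.15 2` check traced above (scripts/common.sh `cmp_versions`) splits both version strings on `.`, `-` and `:` (the `IFS=.-:` records) and compares them component by component. A minimal sketch of that comparison, assuming purely numeric components (the real helper in scripts/common.sh handles more cases):

```shell
#!/usr/bin/env bash
# Simplified version less-than, modeled on the cmp_versions trace above:
# split on '.', '-' or ':', then compare numerically, left to right.
version_lt() {
  local IFS=.-: v
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local len=${#ver1[@]}
  (( ${#ver2[@]} > len )) && len=${#ver2[@]}
  for (( v = 0; v < len; v++ )); do
    local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
    (( a > b )) && return 1
    (( a < b )) && return 0
  done
  return 1   # equal versions are not less-than
}
```

With this, `version_lt 1.15 2` succeeds (1 < 2 on the first component), matching the lcov-version branch taken in the log.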
05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # uname -s 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@15 -- # shopt -s extglob 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@5 -- # export PATH 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@51 -- # : 0 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:26:35.206 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@55 -- # have_pci_nics=0 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
target/initiator_timeout.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@14 -- # nvmftestinit 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@476 -- # prepare_net_devs 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@438 -- # local -g is_hw=no 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@440 -- # remove_spdk_ns 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@309 -- # xtrace_disable 00:26:35.206 05:26:48 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:41.778 05:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # pci_devs=() 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@315 -- # local -a pci_devs 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # pci_net_devs=() 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # pci_drivers=() 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@317 -- # local -A pci_drivers 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # net_devs=() 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@319 -- # local -ga net_devs 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # e810=() 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@320 -- # local -ga e810 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # x722=() 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@321 -- # local -ga x722 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # mlx=() 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@322 -- # local -ga mlx 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:26:41.778 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@361 -- # (( 2 == 0 )) 
00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:26:41.779 Found 0000:af:00.0 (0x8086 - 0x159b) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:26:41.779 Found 0000:af:00.1 (0x8086 - 0x159b) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:26:41.779 05:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:26:41.779 Found net devices under 0000:af:00.0: cvl_0_0 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:41.779 05:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@418 -- # [[ up == up ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:26:41.779 Found net devices under 0000:af:00.1: cvl_0_1 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@442 -- # is_hw=yes 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:41.779 05:26:54 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:26:41.779 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:41.779 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.302 ms 00:26:41.779 00:26:41.779 --- 10.0.0.2 ping statistics --- 00:26:41.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.779 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:41.779 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:41.779 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:26:41.779 00:26:41.779 --- 10.0.0.1 ping statistics --- 00:26:41.779 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:41.779 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@450 -- # return 0 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@15 -- # nvmfappstart -m 0xF 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@509 -- # nvmfpid=396820 
00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@510 -- # waitforlisten 396820 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@835 -- # '[' -z 396820 ']' 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:41.779 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:41.779 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.779 [2024-12-15 05:26:54.738080] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:26:41.779 [2024-12-15 05:26:54.738128] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:41.780 [2024-12-15 05:26:54.814387] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:41.780 [2024-12-15 05:26:54.837398] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:26:41.780 [2024-12-15 05:26:54.837436] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:41.780 [2024-12-15 05:26:54.837443] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:41.780 [2024-12-15 05:26:54.837450] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:41.780 [2024-12-15 05:26:54.837456] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:41.780 [2024-12-15 05:26:54.838839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:26:41.780 [2024-12-15 05:26:54.838946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:26:41.780 [2024-12-15 05:26:54.839055] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:41.780 [2024-12-15 05:26:54.839056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@868 -- # return 0 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@17 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $nvmfpid; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:26:41.780 
05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.780 05:26:54 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.780 Malloc0 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@22 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 30 -t 30 -w 30 -n 30 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.780 Delay0 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.780 [2024-12-15 05:26:55.035367] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:41.780 [2024-12-15 05:26:55.068587] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:41.780 05:26:55 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@29 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:26:42.717 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@31 -- # waitforserial SPDKISFASTANDAWESOME 00:26:42.717 
05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1202 -- # local i=0 00:26:42.717 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:26:42.717 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:26:42.717 05:26:56 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1209 -- # sleep 2 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1212 -- # return 0 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@35 -- # fio_pid=397513 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 60 -v 00:26:44.622 05:26:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@37 -- # sleep 3 00:26:44.622 [global] 00:26:44.622 thread=1 00:26:44.622 invalidate=1 00:26:44.622 rw=write 00:26:44.622 time_based=1 00:26:44.622 runtime=60 00:26:44.622 ioengine=libaio 00:26:44.622 direct=1 00:26:44.622 bs=4096 00:26:44.622 
iodepth=1 00:26:44.622 norandommap=0 00:26:44.622 numjobs=1 00:26:44.622 00:26:44.622 verify_dump=1 00:26:44.622 verify_backlog=512 00:26:44.622 verify_state_save=0 00:26:44.622 do_verify=1 00:26:44.622 verify=crc32c-intel 00:26:44.622 [job0] 00:26:44.622 filename=/dev/nvme0n1 00:26:44.880 Could not set queue depth (nvme0n1) 00:26:45.138 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:26:45.138 fio-3.35 00:26:45.138 Starting 1 thread 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@40 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 31000000 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.671 true 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@41 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 31000000 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.671 true 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@42 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 31000000 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
common/autotest_common.sh@10 -- # set +x 00:26:47.671 true 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@43 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 310000000 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:47.671 true 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:47.671 05:27:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@45 -- # sleep 3 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@48 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_read 30 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.959 true 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@49 -- # rpc_cmd bdev_delay_update_latency Delay0 avg_write 30 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.959 true 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.959 05:27:04 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@50 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_read 30 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.959 true 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@51 -- # rpc_cmd bdev_delay_update_latency Delay0 p99_write 30 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:26:50.959 true 00:26:50.959 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:50.960 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@53 -- # fio_status=0 00:26:50.960 05:27:04 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@54 -- # wait 397513 00:27:47.198 00:27:47.198 job0: (groupid=0, jobs=1): err= 0: pid=397635: Sun Dec 15 05:27:58 2024 00:27:47.198 read: IOPS=7, BW=30.3KiB/s (31.0kB/s)(1816KiB/60030msec) 00:27:47.198 slat (usec): min=11, max=14781, avg=55.58, stdev=692.67 00:27:47.198 clat (usec): min=463, max=41372k, avg=131936.47, stdev=1939750.89 00:27:47.198 lat (usec): min=497, max=41372k, avg=131992.06, stdev=1939749.45 00:27:47.198 clat percentiles (msec): 00:27:47.198 | 1.00th=[ 41], 5.00th=[ 41], 10.00th=[ 42], 20.00th=[ 42], 00:27:47.198 | 30.00th=[ 42], 40.00th=[ 42], 50.00th=[ 42], 60.00th=[ 42], 00:27:47.198 | 70.00th=[ 42], 80.00th=[ 42], 90.00th=[ 42], 95.00th=[ 42], 00:27:47.198 | 99.00th=[ 
43], 99.50th=[ 43], 99.90th=[17113], 99.95th=[17113], 00:27:47.198 | 99.99th=[17113] 00:27:47.198 write: IOPS=8, BW=34.1KiB/s (34.9kB/s)(2048KiB/60030msec); 0 zone resets 00:27:47.198 slat (nsec): min=8821, max=43734, avg=10011.44, stdev=2041.03 00:27:47.198 clat (usec): min=156, max=397, avg=186.70, stdev=15.49 00:27:47.198 lat (usec): min=165, max=441, avg=196.71, stdev=16.57 00:27:47.198 clat percentiles (usec): 00:27:47.198 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:27:47.198 | 30.00th=[ 182], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 190], 00:27:47.198 | 70.00th=[ 194], 80.00th=[ 196], 90.00th=[ 202], 95.00th=[ 208], 00:27:47.198 | 99.00th=[ 219], 99.50th=[ 221], 99.90th=[ 400], 99.95th=[ 400], 00:27:47.198 | 99.99th=[ 400] 00:27:47.198 bw ( KiB/s): min= 4096, max= 4096, per=100.00%, avg=4096.00, stdev= 0.00, samples=1 00:27:47.198 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:27:47.198 lat (usec) : 250=52.90%, 500=0.21% 00:27:47.198 lat (msec) : 50=46.79%, >=2000=0.10% 00:27:47.198 cpu : usr=0.02%, sys=0.02%, ctx=967, majf=0, minf=1 00:27:47.198 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:27:47.198 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.198 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:27:47.198 issued rwts: total=454,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:27:47.198 latency : target=0, window=0, percentile=100.00%, depth=1 00:27:47.198 00:27:47.198 Run status group 0 (all jobs): 00:27:47.198 READ: bw=30.3KiB/s (31.0kB/s), 30.3KiB/s-30.3KiB/s (31.0kB/s-31.0kB/s), io=1816KiB (1860kB), run=60030-60030msec 00:27:47.198 WRITE: bw=34.1KiB/s (34.9kB/s), 34.1KiB/s-34.1KiB/s (34.9kB/s-34.9kB/s), io=2048KiB (2097kB), run=60030-60030msec 00:27:47.198 00:27:47.198 Disk stats (read/write): 00:27:47.198 nvme0n1: ios=549/512, merge=0/0, ticks=19577/80, in_queue=19657, util=99.80% 00:27:47.198 05:27:58 
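The inline job dump and fio summary above correspond to a standalone fio job file. The sketch below is reconstructed from the logged parameters only; the device path is whatever the test happened to attach, so treat it as illustrative:

```ini
[global]
thread=1
invalidate=1
rw=write
time_based=1
runtime=60
ioengine=libaio
direct=1
bs=4096
iodepth=1
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
```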
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@56 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:27:47.198 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@57 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1223 -- # local i=0 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1235 -- # return 0 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@59 -- # '[' 0 -eq 0 ']' 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@60 -- # echo 'nvmf hotplug test: fio successful as expected' 00:27:47.198 nvmf hotplug test: fio successful as expected 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- 
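The `waitforserial_disconnect` trace above polls `lsblk -l -o NAME,SERIAL` until the serial stops matching. A minimal sketch of that idiom, with the grep predicate factored out so it can be exercised on canned input (function names and the retry budget are illustrative, not the exact helpers from autotest_common.sh):

```shell
#!/usr/bin/env bash
# Sketch of the serial-polling idiom seen in waitforserial/waitforserial_disconnect.

# True if the given serial appears as a whole word in lsblk-style
# "NAME SERIAL" output supplied on stdin.
serial_present() {
    grep -q -w "$1"
}

# Poll until the serial disappears (the disconnect case), up to N tries.
wait_serial_gone() {
    local serial=$1 tries=${2:-15} i=0
    while (( i++ < tries )); do
        lsblk -l -o NAME,SERIAL | serial_present "$serial" || return 0
        sleep 2
    done
    return 1
}
```

The real helpers also distinguish connect (`grep -c`, counting devices) from disconnect (`grep -q -w`, presence check), as the @1211 and @1231 lines above show.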
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.198 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@69 -- # rm -f ./local-job0-0-verify.state 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@71 -- # trap - SIGINT SIGTERM EXIT 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- target/initiator_timeout.sh@73 -- # nvmftestfini 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@516 -- # nvmfcleanup 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@121 -- # sync 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@124 -- # set +e 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@125 -- # for i in {1..20} 00:27:47.199 05:27:58 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:27:47.199 rmmod nvme_tcp 00:27:47.199 rmmod nvme_fabrics 00:27:47.199 rmmod nvme_keyring 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@128 -- # set -e 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@129 -- # return 0 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@517 -- # '[' -n 396820 ']' 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@518 -- # killprocess 396820 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@954 -- # '[' -z 396820 ']' 00:27:47.199 05:27:59 
nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@958 -- # kill -0 396820 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # uname 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 396820 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@972 -- # echo 'killing process with pid 396820' 00:27:47.199 killing process with pid 396820 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@973 -- # kill 396820 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@978 -- # wait 396820 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@297 -- # iptr 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-save 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@791 -- # iptables-restore 
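The `killprocess` sequence traced above checks the pid is alive with `kill -0`, resolves its command name with `ps --no-headers -o comm=`, and refuses to signal a `sudo` wrapper. A simplified sketch of that pattern (names are illustrative, not the script's exact helper):

```shell
#!/usr/bin/env bash
# Sketch of the killprocess pattern: verify the pid exists and is not a
# privileged wrapper before signalling it.

process_alive() {
    # kill -0 delivers no signal; it only checks that the pid exists
    kill -0 "$1" 2>/dev/null
}

safe_kill() {
    local pid=$1
    process_alive "$pid" || return 0            # already gone: nothing to do
    local name
    name=$(ps --no-headers -o comm= -p "$pid")  # Linux procps option, as in the trace
    [ "$name" = sudo ] && return 1              # never SIGTERM the sudo wrapper itself
    kill "$pid"
}
```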
00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@302 -- # remove_spdk_ns 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:47.199 05:27:59 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:27:47.768 00:27:47.768 real 1m12.718s 00:27:47.768 user 4m23.410s 00:27:47.768 sys 0m6.183s 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra.nvmf_initiator_timeout -- common/autotest_common.sh@10 -- # set +x 00:27:47.768 ************************************ 00:27:47.768 END TEST nvmf_initiator_timeout 00:27:47.768 ************************************ 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@309 -- # xtrace_disable 00:27:47.768 05:28:01 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # 
pci_devs=() 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # net_devs=() 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # e810=() 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # local -ga e810 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # x722=() 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # local -ga x722 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # mlx=() 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@322 -- # local -ga mlx 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:54.340 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@336 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:54.341 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 
00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:54.341 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:54.341 Found net devices under 0000:af:00.0: cvl_0_0 00:27:54.341 05:28:06 
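The discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` and then keeps only the basename of each entry via the `"${pci_net_devs[@]##*/}"` expansion. A sketch with the trim step factored out so the expansion itself is visible (the PCI address is illustrative):

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup in gather_supported_nvmf_pci_devs: the kernel
# exposes a PCI function's network interfaces under its sysfs node.

# Strip everything up to the last '/' (the "${p##*/}" idiom from nvmf/common.sh).
trim_to_ifname() {
    printf '%s\n' "${1##*/}"
}

# List net devices under one PCI function, e.g. list_pci_net_devs 0000:af:00.0
list_pci_net_devs() {
    local entry
    for entry in "/sys/bus/pci/devices/$1/net/"*; do
        [ -e "$entry" ] && trim_to_ifname "$entry"
    done
}
```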
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:54.341 Found net devices under 0000:af:00.1: cvl_0_1 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:27:54.341 ************************************ 00:27:54.341 START 
TEST nvmf_perf_adq 00:27:54.341 ************************************ 00:27:54.341 05:28:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:27:54.341 * Looking for test storage... 00:27:54.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lcov --version 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:27:54.341 05:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
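The `cmp_versions` trace above splits dotted versions on `.-:` into arrays and compares them field by field (here deciding that lcov 1.15 < 2). A compact equivalent, not the script's own helper, leans on GNU `sort -V` instead:

```shell
#!/usr/bin/env bash
# Version-order comparison sketch; assumes GNU coreutils sort (-V).

ver_lt() {
    # true when $1 sorts strictly before $2 in version order
    [ "$1" != "$2" ] &&
        [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}
```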
common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:27:54.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.341 --rc genhtml_branch_coverage=1 00:27:54.341 --rc genhtml_function_coverage=1 00:27:54.341 --rc genhtml_legend=1 00:27:54.341 --rc geninfo_all_blocks=1 00:27:54.341 --rc geninfo_unexecuted_blocks=1 00:27:54.341 00:27:54.341 ' 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:27:54.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.341 --rc genhtml_branch_coverage=1 00:27:54.341 --rc genhtml_function_coverage=1 00:27:54.341 --rc genhtml_legend=1 00:27:54.341 --rc geninfo_all_blocks=1 00:27:54.341 --rc geninfo_unexecuted_blocks=1 00:27:54.341 00:27:54.341 ' 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:27:54.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.341 --rc genhtml_branch_coverage=1 00:27:54.341 --rc genhtml_function_coverage=1 00:27:54.341 --rc genhtml_legend=1 00:27:54.341 --rc geninfo_all_blocks=1 00:27:54.341 --rc geninfo_unexecuted_blocks=1 00:27:54.341 00:27:54.341 ' 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:27:54.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:54.341 --rc genhtml_branch_coverage=1 00:27:54.341 --rc genhtml_function_coverage=1 00:27:54.341 --rc genhtml_legend=1 00:27:54.341 --rc geninfo_all_blocks=1 00:27:54.341 --rc geninfo_unexecuted_blocks=1 00:27:54.341 00:27:54.341 ' 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:54.341 
05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:54.341 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # : 0 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:27:54.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@55 -- # have_pci_nics=0 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:27:54.342 05:28:07 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:27:54.342 05:28:07 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:27:59.616 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:27:59.617 05:28:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:27:59.617 Found 0000:af:00.0 (0x8086 - 0x159b) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:27:59.617 Found 0000:af:00.1 (0x8086 - 0x159b) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:27:59.617 Found net devices under 0000:af:00.0: cvl_0_0 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 
0 )) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:27:59.617 Found net devices under 0000:af:00.1: cvl_0_1 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:27:59.617 05:28:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:00.554 05:28:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:03.847 05:28:17 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:09.122 05:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:09.122 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga 
net_devs 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:09.123 05:28:22 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:09.123 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:28:09.123 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:09.123 Found net devices under 0000:af:00.0: cvl_0_0 
00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:09.123 Found net devices under 0000:af:00.1: cvl_0_1 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 
-- # ip link set cvl_0_1 up 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:09.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:09.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.936 ms 00:28:09.123 00:28:09.123 --- 10.0.0.2 ping statistics --- 00:28:09.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.123 rtt min/avg/max/mdev = 0.936/0.936/0.936/0.000 ms 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:09.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:09.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.228 ms 00:28:09.123 00:28:09.123 --- 10.0.0.1 ping statistics --- 00:28:09.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:09.123 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:09.123 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=415868 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 415868 00:28:09.124 
05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 415868 ']' 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:09.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:09.124 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.124 [2024-12-15 05:28:22.768458] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:09.124 [2024-12-15 05:28:22.768508] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:09.382 [2024-12-15 05:28:22.845894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:09.382 [2024-12-15 05:28:22.869534] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:09.382 [2024-12-15 05:28:22.869571] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:09.382 [2024-12-15 05:28:22.869578] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:09.382 [2024-12-15 05:28:22.869584] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:09.382 [2024-12-15 05:28:22.869589] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:09.382 [2024-12-15 05:28:22.870906] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:09.382 [2024-12-15 05:28:22.871030] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:09.382 [2024-12-15 05:28:22.871079] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:09.382 [2024-12-15 05:28:22.871080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.382 05:28:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.382 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.382 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:09.382 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.382 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.639 [2024-12-15 05:28:23.084473] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.639 
05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.639 Malloc1 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:09.639 [2024-12-15 05:28:23.145646] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:09.639 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=415925 00:28:09.640 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:28:09.640 05:28:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:11.536 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:28:11.536 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:11.536 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:11.536 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:11.536 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:28:11.536 "tick_rate": 2100000000, 00:28:11.536 "poll_groups": [ 00:28:11.536 { 00:28:11.536 "name": "nvmf_tgt_poll_group_000", 00:28:11.536 "admin_qpairs": 1, 00:28:11.536 "io_qpairs": 1, 00:28:11.536 "current_admin_qpairs": 1, 00:28:11.536 "current_io_qpairs": 1, 00:28:11.536 "pending_bdev_io": 0, 00:28:11.536 "completed_nvme_io": 19943, 00:28:11.536 "transports": [ 00:28:11.536 { 00:28:11.536 "trtype": "TCP" 00:28:11.536 } 00:28:11.536 ] 00:28:11.536 }, 00:28:11.536 { 00:28:11.536 "name": "nvmf_tgt_poll_group_001", 00:28:11.536 "admin_qpairs": 0, 00:28:11.536 "io_qpairs": 1, 00:28:11.536 "current_admin_qpairs": 0, 00:28:11.536 "current_io_qpairs": 1, 00:28:11.536 "pending_bdev_io": 0, 00:28:11.536 "completed_nvme_io": 19795, 00:28:11.536 "transports": [ 
00:28:11.536 { 00:28:11.536 "trtype": "TCP" 00:28:11.536 } 00:28:11.536 ] 00:28:11.536 }, 00:28:11.536 { 00:28:11.536 "name": "nvmf_tgt_poll_group_002", 00:28:11.536 "admin_qpairs": 0, 00:28:11.536 "io_qpairs": 1, 00:28:11.536 "current_admin_qpairs": 0, 00:28:11.536 "current_io_qpairs": 1, 00:28:11.536 "pending_bdev_io": 0, 00:28:11.536 "completed_nvme_io": 20134, 00:28:11.536 "transports": [ 00:28:11.536 { 00:28:11.536 "trtype": "TCP" 00:28:11.536 } 00:28:11.536 ] 00:28:11.536 }, 00:28:11.536 { 00:28:11.536 "name": "nvmf_tgt_poll_group_003", 00:28:11.536 "admin_qpairs": 0, 00:28:11.536 "io_qpairs": 1, 00:28:11.536 "current_admin_qpairs": 0, 00:28:11.536 "current_io_qpairs": 1, 00:28:11.536 "pending_bdev_io": 0, 00:28:11.536 "completed_nvme_io": 20279, 00:28:11.536 "transports": [ 00:28:11.536 { 00:28:11.536 "trtype": "TCP" 00:28:11.536 } 00:28:11.536 ] 00:28:11.536 } 00:28:11.536 ] 00:28:11.536 }' 00:28:11.536 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:28:11.536 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:28:11.793 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:28:11.793 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:28:11.793 05:28:25 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 415925 00:28:19.891 Initializing NVMe Controllers 00:28:19.891 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:19.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:19.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:19.891 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:19.891 Associating TCP (addr:10.0.0.2 
subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:19.891 Initialization complete. Launching workers. 00:28:19.891 ======================================================== 00:28:19.891 Latency(us) 00:28:19.891 Device Information : IOPS MiB/s Average min max 00:28:19.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10631.80 41.53 6020.81 2357.30 10668.75 00:28:19.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10580.10 41.33 6050.00 2169.56 10171.00 00:28:19.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10703.50 41.81 5979.47 2371.99 10350.40 00:28:19.891 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10577.90 41.32 6050.84 2333.18 10168.46 00:28:19.891 ======================================================== 00:28:19.891 Total : 42493.30 165.99 6025.14 2169.56 10668.75 00:28:19.891 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:19.891 rmmod nvme_tcp 00:28:19.891 rmmod nvme_fabrics 00:28:19.891 rmmod nvme_keyring 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:19.891 05:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 415868 ']' 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 415868 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 415868 ']' 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 415868 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 415868 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 415868' 00:28:19.891 killing process with pid 415868 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 415868 00:28:19.891 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 415868 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:20.150 05:28:33 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:20.150 05:28:33 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:22.068 05:28:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:22.068 05:28:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:28:22.068 05:28:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:28:22.068 05:28:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:28:23.449 05:28:36 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:28:25.989 05:28:39 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@476 -- # prepare_net_devs 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # xtrace_disable 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # pci_devs=() 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # net_devs=() 00:28:31.268 05:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # e810=() 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # local -ga e810 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # x722=() 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # local -ga x722 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # mlx=() 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@322 -- # local -ga mlx 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:31.268 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:31.269 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:31.269 05:28:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:31.269 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 
0000:af:00.0: cvl_0_0' 00:28:31.269 Found net devices under 0000:af:00.0: cvl_0_0 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:31.269 Found net devices under 0000:af:00.1: cvl_0_1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@442 -- # is_hw=yes 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@250 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_0 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:31.269 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:31.269 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.766 ms 00:28:31.269 00:28:31.269 --- 10.0.0.2 ping statistics --- 00:28:31.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.269 rtt min/avg/max/mdev = 0.766/0.766/0.766/0.000 ms 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:31.269 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:31.269 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.214 ms 00:28:31.269 00:28:31.269 --- 10.0.0.1 ping statistics --- 00:28:31.269 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:31.269 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # return 0 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:28:31.269 net.core.busy_poll = 1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:28:31.269 net.core.busy_read = 1 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:28:31.269 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:28:31.529 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:28:31.529 05:28:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@509 -- # nvmfpid=419914 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@510 -- # waitforlisten 419914 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 
--wait-for-rpc 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 419914 ']' 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:31.529 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:31.529 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.529 [2024-12-15 05:28:45.105962] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:31.529 [2024-12-15 05:28:45.106012] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:31.529 [2024-12-15 05:28:45.185024] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:31.529 [2024-12-15 05:28:45.208063] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:31.529 [2024-12-15 05:28:45.208101] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:31.529 [2024-12-15 05:28:45.208110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:31.529 [2024-12-15 05:28:45.208117] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
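As a sanity check on the spdk_nvme_perf summary earlier in the log, the per-core IOPS figures should add up to the reported total. The values below are copied from that summary table; the script is purely illustrative arithmetic, not part of the test suite.

```python
# Per-core IOPS as reported by spdk_nvme_perf (cores 4-7) in the log above.
per_core_iops = {
    4: 10631.80,
    5: 10580.10,
    6: 10703.50,
    7: 10577.90,
}

# The summary line reports Total = 42493.30 IOPS; the per-core rows
# should sum to it (within float rounding).
total = sum(per_core_iops.values())
assert abs(total - 42493.30) < 0.01
print(f"aggregate IOPS: {total:.2f}")
```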
00:28:31.529 [2024-12-15 05:28:45.208122] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:31.529 [2024-12-15 05:28:45.209526] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:31.529 [2024-12-15 05:28:45.209562] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:31.529 [2024-12-15 05:28:45.209672] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:31.529 [2024-12-15 05:28:45.209673] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
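The ADQ host-side setup that adq_configure_driver performs in this run can be read more easily when the commands are pulled out of the xtrace output. This is a condensed sketch of the steps shown in the log; the interface name (cvl_0_0), namespace (cvl_0_0_ns_spdk), target IP, and port match this particular run and require an ice-driven E810 NIC, so treat it as a config fragment rather than a portable script.

```shell
# Enable hardware TC offload and disable packet-inspect optimization
# on the target-side port (run inside the target's network namespace).
ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 \
    channel-pkt-inspect-optimize off

# Turn on busy polling so the kernel polls sockets instead of sleeping.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# Create two traffic classes (2 queues each) in channel mode, then steer
# NVMe/TCP traffic (dst 10.0.0.2:4420) into TC 1 via a hardware flower filter.
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio \
    num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip \
    parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp \
    dst_port 4420 skip_sw hw_tc 1
```

The second target start then differs from the first only in passing `--enable-placement-id 1` to sock_impl_set_options and `--sock-priority 1` to nvmf_create_transport, which is what ties connections to the ADQ traffic class.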
00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.788 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.789 [2024-12-15 05:28:45.422773] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.789 05:28:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.789 Malloc1 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:31.789 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:32.048 [2024-12-15 05:28:45.482741] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=419944 
00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:28:32.048 05:28:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:28:33.954 "tick_rate": 2100000000, 00:28:33.954 "poll_groups": [ 00:28:33.954 { 00:28:33.954 "name": "nvmf_tgt_poll_group_000", 00:28:33.954 "admin_qpairs": 1, 00:28:33.954 "io_qpairs": 1, 00:28:33.954 "current_admin_qpairs": 1, 00:28:33.954 "current_io_qpairs": 1, 00:28:33.954 "pending_bdev_io": 0, 00:28:33.954 "completed_nvme_io": 25771, 00:28:33.954 "transports": [ 00:28:33.954 { 00:28:33.954 "trtype": "TCP" 00:28:33.954 } 00:28:33.954 ] 00:28:33.954 }, 00:28:33.954 { 00:28:33.954 "name": "nvmf_tgt_poll_group_001", 00:28:33.954 "admin_qpairs": 0, 00:28:33.954 "io_qpairs": 3, 00:28:33.954 "current_admin_qpairs": 0, 00:28:33.954 "current_io_qpairs": 3, 00:28:33.954 "pending_bdev_io": 0, 00:28:33.954 "completed_nvme_io": 29727, 00:28:33.954 "transports": [ 00:28:33.954 { 00:28:33.954 "trtype": "TCP" 00:28:33.954 } 00:28:33.954 ] 00:28:33.954 }, 00:28:33.954 { 00:28:33.954 "name": "nvmf_tgt_poll_group_002", 00:28:33.954 "admin_qpairs": 0, 00:28:33.954 "io_qpairs": 0, 00:28:33.954 "current_admin_qpairs": 0, 
00:28:33.954 "current_io_qpairs": 0, 00:28:33.954 "pending_bdev_io": 0, 00:28:33.954 "completed_nvme_io": 0, 00:28:33.954 "transports": [ 00:28:33.954 { 00:28:33.954 "trtype": "TCP" 00:28:33.954 } 00:28:33.954 ] 00:28:33.954 }, 00:28:33.954 { 00:28:33.954 "name": "nvmf_tgt_poll_group_003", 00:28:33.954 "admin_qpairs": 0, 00:28:33.954 "io_qpairs": 0, 00:28:33.954 "current_admin_qpairs": 0, 00:28:33.954 "current_io_qpairs": 0, 00:28:33.954 "pending_bdev_io": 0, 00:28:33.954 "completed_nvme_io": 0, 00:28:33.954 "transports": [ 00:28:33.954 { 00:28:33.954 "trtype": "TCP" 00:28:33.954 } 00:28:33.954 ] 00:28:33.954 } 00:28:33.954 ] 00:28:33.954 }' 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:28:33.954 05:28:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 419944 00:28:42.076 Initializing NVMe Controllers 00:28:42.076 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:28:42.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:28:42.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:28:42.076 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:28:42.076 Initialization complete. Launching workers. 
00:28:42.076 ======================================================== 00:28:42.076 Latency(us) 00:28:42.076 Device Information : IOPS MiB/s Average min max 00:28:42.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 14891.40 58.17 4297.13 1828.86 45228.01 00:28:42.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 4903.00 19.15 13095.35 1521.70 61095.01 00:28:42.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5116.00 19.98 12510.48 1785.70 60774.76 00:28:42.076 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5036.10 19.67 12706.23 1733.23 59753.84 00:28:42.076 ======================================================== 00:28:42.076 Total : 29946.50 116.98 8554.93 1521.70 61095.01 00:28:42.076 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # sync 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set +e 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:42.076 rmmod nvme_tcp 00:28:42.076 rmmod nvme_fabrics 00:28:42.076 rmmod nvme_keyring 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@128 -- # set -e 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # return 0 00:28:42.076 05:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@517 -- # '[' -n 419914 ']' 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@518 -- # killprocess 419914 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 419914 ']' 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 419914 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:42.076 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 419914 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 419914' 00:28:42.335 killing process with pid 419914 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 419914 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 419914 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:42.335 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # iptr 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-save 00:28:42.336 05:28:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@791 -- # iptables-restore 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:42.336 05:28:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:28:44.974 00:28:44.974 real 0m51.156s 00:28:44.974 user 2m43.977s 00:28:44.974 sys 0m11.349s 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:28:44.974 ************************************ 00:28:44.974 END TEST nvmf_perf_adq 00:28:44.974 ************************************ 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:28:44.974 ************************************ 00:28:44.974 START TEST nvmf_shutdown 00:28:44.974 ************************************ 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:28:44.974 * Looking for test storage... 00:28:44.974 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:28:44.974 05:28:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
scripts/common.sh@368 -- # return 0 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:28:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.974 --rc genhtml_branch_coverage=1 00:28:44.974 --rc genhtml_function_coverage=1 00:28:44.974 --rc genhtml_legend=1 00:28:44.974 --rc geninfo_all_blocks=1 00:28:44.974 --rc geninfo_unexecuted_blocks=1 00:28:44.974 00:28:44.974 ' 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:28:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.974 --rc genhtml_branch_coverage=1 00:28:44.974 --rc genhtml_function_coverage=1 00:28:44.974 --rc genhtml_legend=1 00:28:44.974 --rc geninfo_all_blocks=1 00:28:44.974 --rc geninfo_unexecuted_blocks=1 00:28:44.974 00:28:44.974 ' 00:28:44.974 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:28:44.974 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.974 --rc genhtml_branch_coverage=1 00:28:44.975 --rc genhtml_function_coverage=1 00:28:44.975 --rc genhtml_legend=1 00:28:44.975 --rc geninfo_all_blocks=1 00:28:44.975 --rc geninfo_unexecuted_blocks=1 00:28:44.975 00:28:44.975 ' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:28:44.975 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:44.975 --rc genhtml_branch_coverage=1 00:28:44.975 --rc genhtml_function_coverage=1 00:28:44.975 --rc genhtml_legend=1 00:28:44.975 --rc geninfo_all_blocks=1 00:28:44.975 --rc geninfo_unexecuted_blocks=1 00:28:44.975 00:28:44.975 ' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # : 0 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:28:44.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@55 -- # have_pci_nics=0 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- 
target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:28:44.975 ************************************ 00:28:44.975 START TEST nvmf_shutdown_tc1 00:28:44.975 ************************************ 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@476 -- # prepare_net_devs 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # xtrace_disable 00:28:44.975 05:28:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # pci_devs=() 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # local -a pci_devs 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # pci_drivers=() 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # net_devs=() 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@319 -- # local -ga net_devs 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # e810=() 00:28:50.440 05:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # local -ga e810 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # x722=() 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # local -ga x722 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # mlx=() 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@322 -- # local -ga mlx 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:50.440 05:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:28:50.440 Found 0000:af:00.0 (0x8086 - 0x159b) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.440 05:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:28:50.440 Found 0000:af:00.1 (0x8086 - 0x159b) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:28:50.440 Found net devices under 0000:af:00.0: cvl_0_0 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@428 -- 
# echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:28:50.440 Found net devices under 0000:af:00.1: cvl_0_1 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:28:50.440 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@442 -- # is_hw=yes 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:28:50.441 05:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:28:50.441 05:29:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:50.441 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:50.441 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:50.441 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:28:50.441 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:28:50.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:50.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.411 ms 00:28:50.741 00:28:50.741 --- 10.0.0.2 ping statistics --- 00:28:50.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.741 rtt min/avg/max/mdev = 0.411/0.411/0.411/0.000 ms 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:50.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:50.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.218 ms 00:28:50.741 00:28:50.741 --- 10.0.0.1 ping statistics --- 00:28:50.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:50.741 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # return 0 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@509 -- # nvmfpid=425079 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@510 -- # waitforlisten 425079 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425079 ']' 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:28:50.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:50.741 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:50.741 [2024-12-15 05:29:04.283305] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:28:50.741 [2024-12-15 05:29:04.283359] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:50.741 [2024-12-15 05:29:04.360941] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:50.741 [2024-12-15 05:29:04.384418] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:50.741 [2024-12-15 05:29:04.384459] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:50.741 [2024-12-15 05:29:04.384468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:50.741 [2024-12-15 05:29:04.384475] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:50.741 [2024-12-15 05:29:04.384481] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:50.741 [2024-12-15 05:29:04.385847] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:28:50.741 [2024-12-15 05:29:04.385876] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:28:50.741 [2024-12-15 05:29:04.385982] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:28:50.741 [2024-12-15 05:29:04.385983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.033 [2024-12-15 05:29:04.525826] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.033 05:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.033 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:51.034 05:29:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.034 Malloc1 00:28:51.034 [2024-12-15 05:29:04.640755] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:51.034 Malloc2 00:28:51.034 Malloc3 00:28:51.322 Malloc4 00:28:51.322 Malloc5 00:28:51.322 Malloc6 00:28:51.322 Malloc7 00:28:51.322 Malloc8 00:28:51.322 Malloc9 
00:28:51.604 Malloc10 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=425351 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 425351 /var/tmp/bdevperf.sock 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 425351 ']' 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:51.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": ${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.604 "method": "bdev_nvme_attach_controller" 00:28:51.604 } 00:28:51.604 EOF 00:28:51.604 )") 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": ${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.604 "method": "bdev_nvme_attach_controller" 00:28:51.604 } 00:28:51.604 EOF 00:28:51.604 )") 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": ${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.604 "method": "bdev_nvme_attach_controller" 00:28:51.604 } 00:28:51.604 EOF 00:28:51.604 )") 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": 
${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.604 "method": "bdev_nvme_attach_controller" 00:28:51.604 } 00:28:51.604 EOF 00:28:51.604 )") 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": ${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.604 "method": "bdev_nvme_attach_controller" 00:28:51.604 } 00:28:51.604 EOF 00:28:51.604 )") 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": ${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.604 "method": "bdev_nvme_attach_controller" 
00:28:51.604 } 00:28:51.604 EOF 00:28:51.604 )") 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": ${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.604 "method": "bdev_nvme_attach_controller" 00:28:51.604 } 00:28:51.604 EOF 00:28:51.604 )") 00:28:51.604 [2024-12-15 05:29:05.116155] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:51.604 [2024-12-15 05:29:05.116203] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.604 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.604 { 00:28:51.604 "params": { 00:28:51.604 "name": "Nvme$subsystem", 00:28:51.604 "trtype": "$TEST_TRANSPORT", 00:28:51.604 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.604 "adrfam": "ipv4", 00:28:51.604 "trsvcid": "$NVMF_PORT", 00:28:51.604 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.604 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.604 "hdgst": ${hdgst:-false}, 00:28:51.604 "ddgst": ${ddgst:-false} 00:28:51.604 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 } 00:28:51.605 EOF 00:28:51.605 )") 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.605 { 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme$subsystem", 00:28:51.605 "trtype": "$TEST_TRANSPORT", 00:28:51.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "$NVMF_PORT", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.605 "hdgst": ${hdgst:-false}, 
00:28:51.605 "ddgst": ${ddgst:-false} 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 } 00:28:51.605 EOF 00:28:51.605 )") 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:51.605 { 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme$subsystem", 00:28:51.605 "trtype": "$TEST_TRANSPORT", 00:28:51.605 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "$NVMF_PORT", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:51.605 "hdgst": ${hdgst:-false}, 00:28:51.605 "ddgst": ${ddgst:-false} 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 } 00:28:51.605 EOF 00:28:51.605 )") 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:51.605 05:29:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme1", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme2", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme3", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme4", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 
00:28:51.605 "name": "Nvme5", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme6", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme7", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme8", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme9", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 },{ 00:28:51.605 "params": { 00:28:51.605 "name": "Nvme10", 00:28:51.605 "trtype": "tcp", 00:28:51.605 "traddr": "10.0.0.2", 00:28:51.605 "adrfam": "ipv4", 00:28:51.605 "trsvcid": "4420", 00:28:51.605 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:51.605 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:51.605 "hdgst": false, 00:28:51.605 "ddgst": false 00:28:51.605 }, 00:28:51.605 "method": "bdev_nvme_attach_controller" 00:28:51.605 }' 00:28:51.605 [2024-12-15 05:29:05.188342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:51.605 [2024-12-15 05:29:05.211258] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:53.576 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:53.576 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:28:53.577 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:53.577 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:53.577 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:28:53.577 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:53.577 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 425351 00:28:53.577 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:28:53.577 05:29:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:28:54.520 
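The config-generation pattern traced above (nvmf/common.sh building one heredoc JSON fragment per subsystem, collecting them in a `config` array, then comma-joining the fragments for `jq`) can be sketched as follows. This is a hypothetical minimal reimplementation for illustration, not the SPDK script itself; the variable names mirror those visible in the trace, and the real script additionally pipes the result through `jq .` and wraps it for bdev_svc/bdevperf consumption.

```shell
#!/usr/bin/env bash
# Sketch of the gen_nvmf_target_json pattern seen in the trace:
# one heredoc fragment per subsystem, joined with commas via IFS.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_target_json() {
  local subsystem
  local config=()
  # "${@:-1}" defaults to a single subsystem "1" when no args are given,
  # matching the `for subsystem in "${@:-1}"` loop in the trace.
  for subsystem in "${@:-1}"; do
    config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
    )")
  done
  # Comma-join the fragments, as the real script does with IFS=, before jq.
  local IFS=,
  printf '[%s]\n' "${config[*]}"
}

gen_target_json 1 2
```

Called as `gen_target_json 1 2 3 4 5 6 7 8 9 10`, this yields the ten `Nvme1`..`Nvme10` attach-controller entries that appear fully expanded in the `printf '%s\n'` output above.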
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 425351 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 425079 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # config=() 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@560 -- # local subsystem config 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.520 { 00:28:54.520 "params": { 00:28:54.520 "name": "Nvme$subsystem", 00:28:54.520 "trtype": "$TEST_TRANSPORT", 00:28:54.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.520 "adrfam": "ipv4", 00:28:54.520 "trsvcid": "$NVMF_PORT", 00:28:54.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.520 "hdgst": ${hdgst:-false}, 00:28:54.520 "ddgst": ${ddgst:-false} 00:28:54.520 }, 00:28:54.520 "method": "bdev_nvme_attach_controller" 00:28:54.520 } 00:28:54.520 EOF 00:28:54.520 )") 00:28:54.520 05:29:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.520 05:29:08 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.520 { 00:28:54.520 "params": { 00:28:54.520 "name": "Nvme$subsystem", 00:28:54.520 "trtype": "$TEST_TRANSPORT", 00:28:54.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.520 "adrfam": "ipv4", 00:28:54.520 "trsvcid": "$NVMF_PORT", 00:28:54.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.520 "hdgst": ${hdgst:-false}, 00:28:54.520 "ddgst": ${ddgst:-false} 00:28:54.520 }, 00:28:54.520 "method": "bdev_nvme_attach_controller" 00:28:54.520 } 00:28:54.520 EOF 00:28:54.520 )") 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.520 { 00:28:54.520 "params": { 00:28:54.520 "name": "Nvme$subsystem", 00:28:54.520 "trtype": "$TEST_TRANSPORT", 00:28:54.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.520 "adrfam": "ipv4", 00:28:54.520 "trsvcid": "$NVMF_PORT", 00:28:54.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.520 "hdgst": ${hdgst:-false}, 00:28:54.520 "ddgst": ${ddgst:-false} 00:28:54.520 }, 00:28:54.520 "method": "bdev_nvme_attach_controller" 00:28:54.520 } 00:28:54.520 EOF 00:28:54.520 )") 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.520 
05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.520 { 00:28:54.520 "params": { 00:28:54.520 "name": "Nvme$subsystem", 00:28:54.520 "trtype": "$TEST_TRANSPORT", 00:28:54.520 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.520 "adrfam": "ipv4", 00:28:54.520 "trsvcid": "$NVMF_PORT", 00:28:54.520 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.520 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.520 "hdgst": ${hdgst:-false}, 00:28:54.520 "ddgst": ${ddgst:-false} 00:28:54.520 }, 00:28:54.520 "method": "bdev_nvme_attach_controller" 00:28:54.520 } 00:28:54.520 EOF 00:28:54.520 )") 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.520 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.520 { 00:28:54.520 "params": { 00:28:54.521 "name": "Nvme$subsystem", 00:28:54.521 "trtype": "$TEST_TRANSPORT", 00:28:54.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "$NVMF_PORT", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.521 "hdgst": ${hdgst:-false}, 00:28:54.521 "ddgst": ${ddgst:-false} 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 } 00:28:54.521 EOF 00:28:54.521 )") 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 
00:28:54.521 { 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme$subsystem", 00:28:54.521 "trtype": "$TEST_TRANSPORT", 00:28:54.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "$NVMF_PORT", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.521 "hdgst": ${hdgst:-false}, 00:28:54.521 "ddgst": ${ddgst:-false} 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 } 00:28:54.521 EOF 00:28:54.521 )") 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.521 { 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme$subsystem", 00:28:54.521 "trtype": "$TEST_TRANSPORT", 00:28:54.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "$NVMF_PORT", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.521 "hdgst": ${hdgst:-false}, 00:28:54.521 "ddgst": ${ddgst:-false} 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 } 00:28:54.521 EOF 00:28:54.521 )") 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.521 [2024-12-15 05:29:08.039835] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:28:54.521 [2024-12-15 05:29:08.039886] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425828 ] 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.521 { 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme$subsystem", 00:28:54.521 "trtype": "$TEST_TRANSPORT", 00:28:54.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "$NVMF_PORT", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.521 "hdgst": ${hdgst:-false}, 00:28:54.521 "ddgst": ${ddgst:-false} 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 } 00:28:54.521 EOF 00:28:54.521 )") 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.521 { 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme$subsystem", 00:28:54.521 "trtype": "$TEST_TRANSPORT", 00:28:54.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "$NVMF_PORT", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.521 "hdgst": ${hdgst:-false}, 00:28:54.521 "ddgst": ${ddgst:-false} 00:28:54.521 }, 00:28:54.521 "method": 
"bdev_nvme_attach_controller" 00:28:54.521 } 00:28:54.521 EOF 00:28:54.521 )") 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:28:54.521 { 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme$subsystem", 00:28:54.521 "trtype": "$TEST_TRANSPORT", 00:28:54.521 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "$NVMF_PORT", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:54.521 "hdgst": ${hdgst:-false}, 00:28:54.521 "ddgst": ${ddgst:-false} 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 } 00:28:54.521 EOF 00:28:54.521 )") 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@582 -- # cat 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@584 -- # jq . 
00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@585 -- # IFS=, 00:28:54.521 05:29:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme1", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme2", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme3", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme4", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 
00:28:54.521 "name": "Nvme5", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme6", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme7", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme8", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.521 "method": "bdev_nvme_attach_controller" 00:28:54.521 },{ 00:28:54.521 "params": { 00:28:54.521 "name": "Nvme9", 00:28:54.521 "trtype": "tcp", 00:28:54.521 "traddr": "10.0.0.2", 00:28:54.521 "adrfam": "ipv4", 00:28:54.521 "trsvcid": "4420", 00:28:54.521 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:28:54.521 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:28:54.521 "hdgst": false, 00:28:54.521 "ddgst": false 00:28:54.521 }, 00:28:54.522 "method": "bdev_nvme_attach_controller" 00:28:54.522 },{ 00:28:54.522 "params": { 00:28:54.522 "name": "Nvme10", 00:28:54.522 "trtype": "tcp", 00:28:54.522 "traddr": "10.0.0.2", 00:28:54.522 "adrfam": "ipv4", 00:28:54.522 "trsvcid": "4420", 00:28:54.522 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:28:54.522 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:28:54.522 "hdgst": false, 00:28:54.522 "ddgst": false 00:28:54.522 }, 00:28:54.522 "method": "bdev_nvme_attach_controller" 00:28:54.522 }' 00:28:54.522 [2024-12-15 05:29:08.115600] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:54.522 [2024-12-15 05:29:08.137959] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:55.899 Running I/O for 1 seconds... 00:28:57.277 2245.00 IOPS, 140.31 MiB/s 00:28:57.277 Latency(us) 00:28:57.277 [2024-12-15T04:29:10.964Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:57.277 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme1n1 : 1.15 283.02 17.69 0.00 0.00 222964.71 16727.28 210713.84 00:28:57.277 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme2n1 : 1.16 276.49 17.28 0.00 0.00 224622.69 16227.96 213709.78 00:28:57.277 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme3n1 : 1.14 281.94 17.62 0.00 0.00 218102.83 25590.25 203723.34 00:28:57.277 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme4n1 : 1.14 279.62 17.48 0.00 0.00 217626.14 14293.09 213709.78 00:28:57.277 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme5n1 : 1.16 275.14 17.20 0.00 0.00 218292.03 17351.44 227690.79 00:28:57.277 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme6n1 : 1.15 277.35 17.33 0.00 0.00 213331.48 14605.17 217704.35 00:28:57.277 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme7n1 : 1.17 274.50 17.16 0.00 0.00 212696.02 13981.01 228689.43 00:28:57.277 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme8n1 : 1.17 273.71 17.11 0.00 0.00 210329.70 13169.62 222697.57 00:28:57.277 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.277 Verification LBA range: start 0x0 length 0x400 00:28:57.277 Nvme9n1 : 1.17 272.96 17.06 0.00 0.00 207965.82 26963.38 219701.64 00:28:57.277 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:57.278 Verification LBA range: start 0x0 length 0x400 00:28:57.278 Nvme10n1 : 1.17 272.51 17.03 0.00 0.00 205243.54 16602.45 234681.30 00:28:57.278 [2024-12-15T04:29:10.965Z] =================================================================================================================== 00:28:57.278 [2024-12-15T04:29:10.965Z] Total : 2767.24 172.95 0.00 0.00 215129.74 13169.62 234681.30 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@516 -- # nvmfcleanup 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # sync 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set +e 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # for i in {1..20} 00:28:57.278 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:28:57.278 rmmod nvme_tcp 00:28:57.537 rmmod nvme_fabrics 00:28:57.537 rmmod nvme_keyring 00:28:57.537 05:29:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@128 -- # set -e 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # return 0 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@517 -- # '[' -n 425079 ']' 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@518 -- # killprocess 425079 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 425079 ']' 00:28:57.537 
05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@958 -- # kill -0 425079 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 425079 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 425079' 00:28:57.537 killing process with pid 425079 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 425079 00:28:57.537 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 425079 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # iptr 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-save 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@791 -- # iptables-restore 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:28:57.796 05:29:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:00.334 00:29:00.334 real 0m15.106s 00:29:00.334 user 0m33.982s 00:29:00.334 sys 0m5.653s 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:29:00.334 ************************************ 00:29:00.334 END TEST nvmf_shutdown_tc1 00:29:00.334 ************************************ 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set 
+x 00:29:00.334 ************************************ 00:29:00.334 START TEST nvmf_shutdown_tc2 00:29:00.334 ************************************ 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # xtrace_disable 
00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # net_devs=() 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # e810=() 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # local -ga e810 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # x722=() 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # local -ga x722 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # mlx=() 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@322 -- # local -ga mlx 
00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:00.334 05:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:00.334 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:00.334 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:00.334 05:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.334 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.335 05:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:00.335 Found net devices under 0000:af:00.0: cvl_0_0 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:00.335 Found net devices under 0000:af:00.1: cvl_0_1 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@444 -- 
# [[ yes == yes ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:00.335 05:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:00.335 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:00.335 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.260 ms 00:29:00.335 00:29:00.335 --- 10.0.0.2 ping statistics --- 00:29:00.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.335 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:00.335 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:00.335 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:29:00.335 00:29:00.335 --- 10.0.0.1 ping statistics --- 00:29:00.335 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:00.335 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # return 0 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:00.335 05:29:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@509 -- # nvmfpid=426837 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@510 -- # waitforlisten 426837 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 426837 ']' 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:00.335 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:00.335 05:29:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.335 [2024-12-15 05:29:13.940840] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:00.335 [2024-12-15 05:29:13.940882] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:00.335 [2024-12-15 05:29:14.014438] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:00.595 [2024-12-15 05:29:14.036918] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:00.595 [2024-12-15 05:29:14.036955] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:00.595 [2024-12-15 05:29:14.036966] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:00.595 [2024-12-15 05:29:14.036973] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:00.595 [2024-12-15 05:29:14.036978] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:00.595 [2024-12-15 05:29:14.038331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:00.595 [2024-12-15 05:29:14.038440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:00.595 [2024-12-15 05:29:14.038553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:00.595 [2024-12-15 05:29:14.038554] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.595 [2024-12-15 05:29:14.169744] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:00.595 05:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:00.595 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:00.595 Malloc1 00:29:00.595 [2024-12-15 05:29:14.278884] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:00.854 Malloc2 00:29:00.854 Malloc3 00:29:00.854 Malloc4 00:29:00.854 Malloc5 00:29:00.854 Malloc6 00:29:00.854 Malloc7 00:29:01.113 Malloc8 00:29:01.113 Malloc9 
00:29:01.113 Malloc10 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=427101 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 427101 /var/tmp/bdevperf.sock 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 427101 ']' 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:01.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:01.113 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # config=() 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@560 -- # local subsystem config 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 
"adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": 
${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 [2024-12-15 05:29:14.754698] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:01.114 [2024-12-15 05:29:14.754745] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid427101 ] 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": 
${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:01.114 { 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme$subsystem", 00:29:01.114 "trtype": "$TEST_TRANSPORT", 00:29:01.114 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "$NVMF_PORT", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:01.114 "hdgst": ${hdgst:-false}, 00:29:01.114 "ddgst": ${ddgst:-false} 00:29:01.114 }, 00:29:01.114 "method": "bdev_nvme_attach_controller" 00:29:01.114 } 00:29:01.114 EOF 00:29:01.114 )") 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@582 -- # cat 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@584 -- # jq . 
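The loop traced above (from `nvmf/common.sh`) builds one JSON fragment per subsystem with a heredoc, accumulates the fragments in a bash array, and joins them for `bdevperf`. A minimal standalone sketch of that accumulation pattern follows; the variable names mirror the trace, the addresses are illustrative, and `python3 -m json.tool` stands in for the log's `jq .` step purely for portability:

```shell
#!/usr/bin/env bash
# Sketch of the config-accumulation pattern seen in the xtrace above:
# one JSON fragment per subsystem via heredoc, collected into an array.
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem"
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done

# Join the fragments with commas, as the IFS=, / "${config[*]}" steps do in the log.
IFS=,
joined="${config[*]}"
printf '%s\n' "$joined"

# Sanity check: wrapped in [], the joined fragments form valid JSON
# (the real script pipes the config through `jq .` instead).
printf '[%s]\n' "$joined" | python3 -m json.tool >/dev/null
```

The resulting comma-joined string is what the `printf '%s\n' '{ ... },{ ... }'` line in the trace shows after all `$subsystem` substitutions have been applied.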
00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@585 -- # IFS=, 00:29:01.114 05:29:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:01.114 "params": { 00:29:01.114 "name": "Nvme1", 00:29:01.114 "trtype": "tcp", 00:29:01.114 "traddr": "10.0.0.2", 00:29:01.114 "adrfam": "ipv4", 00:29:01.114 "trsvcid": "4420", 00:29:01.114 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:01.114 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:01.115 "hdgst": false, 00:29:01.115 "ddgst": false 00:29:01.115 }, 00:29:01.115 "method": "bdev_nvme_attach_controller" 00:29:01.115 },{ 00:29:01.115 "params": { 00:29:01.115 "name": "Nvme2", 00:29:01.115 "trtype": "tcp", 00:29:01.115 "traddr": "10.0.0.2", 00:29:01.115 "adrfam": "ipv4", 00:29:01.115 "trsvcid": "4420", 00:29:01.115 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:01.115 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:01.115 "hdgst": false, 00:29:01.115 "ddgst": false 00:29:01.235 }, 00:29:01.235 "method": "bdev_nvme_attach_controller" 00:29:01.235 },{ 00:29:01.235 "params": { 00:29:01.235 "name": "Nvme3", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 },{ 00:29:01.236 "params": { 00:29:01.236 "name": "Nvme4", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 },{ 00:29:01.236 "params": { 
00:29:01.236 "name": "Nvme5", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 },{ 00:29:01.236 "params": { 00:29:01.236 "name": "Nvme6", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 },{ 00:29:01.236 "params": { 00:29:01.236 "name": "Nvme7", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 },{ 00:29:01.236 "params": { 00:29:01.236 "name": "Nvme8", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 },{ 00:29:01.236 "params": { 00:29:01.236 "name": "Nvme9", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 },{ 00:29:01.236 "params": { 00:29:01.236 "name": "Nvme10", 00:29:01.236 "trtype": "tcp", 00:29:01.236 "traddr": "10.0.0.2", 00:29:01.236 "adrfam": "ipv4", 00:29:01.236 "trsvcid": "4420", 00:29:01.236 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:01.236 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:01.236 "hdgst": false, 00:29:01.236 "ddgst": false 00:29:01.236 }, 00:29:01.236 "method": "bdev_nvme_attach_controller" 00:29:01.236 }' 00:29:01.495 [2024-12-15 05:29:14.831449] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:01.495 [2024-12-15 05:29:14.853928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:02.871 Running I/O for 10 seconds... 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:03.131 05:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:29:03.131 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:03.390 05:29:16 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 427101 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 427101 ']' 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 427101 00:29:03.390 05:29:16 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:03.390 05:29:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:03.390 05:29:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 427101 00:29:03.390 05:29:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:03.390 05:29:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:03.390 05:29:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 427101' 00:29:03.390 killing process with pid 427101 00:29:03.390 05:29:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 427101 00:29:03.390 05:29:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 427101 00:29:03.649 Received shutdown signal, test time was about 0.733874 seconds 00:29:03.649 00:29:03.649 Latency(us) 00:29:03.649 [2024-12-15T04:29:17.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:03.649 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme1n1 : 0.73 349.14 21.82 0.00 0.00 180630.06 14417.92 200727.41 00:29:03.649 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme2n1 : 0.71 272.21 17.01 0.00 0.00 226841.36 20222.54 207717.91 00:29:03.649 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme3n1 : 0.72 305.54 19.10 0.00 0.00 193825.96 15603.81 199728.76 00:29:03.649 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme4n1 : 0.70 275.59 17.22 0.00 0.00 213220.21 
16477.62 209715.20 00:29:03.649 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme5n1 : 0.72 265.05 16.57 0.00 0.00 217681.92 16227.96 215707.06 00:29:03.649 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme6n1 : 0.72 266.61 16.66 0.00 0.00 211060.70 17850.76 212711.13 00:29:03.649 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme7n1 : 0.71 271.58 16.97 0.00 0.00 201199.42 25964.74 174762.67 00:29:03.649 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme8n1 : 0.71 269.94 16.87 0.00 0.00 197998.20 14105.84 221698.93 00:29:03.649 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme9n1 : 0.73 262.83 16.43 0.00 0.00 199228.79 17351.44 227690.79 00:29:03.649 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:03.649 Verification LBA range: start 0x0 length 0x400 00:29:03.649 Nvme10n1 : 0.73 263.96 16.50 0.00 0.00 193048.38 16103.13 231685.36 00:29:03.649 [2024-12-15T04:29:17.336Z] =================================================================================================================== 00:29:03.649 [2024-12-15T04:29:17.336Z] Total : 2802.45 175.15 0.00 0.00 202616.98 14105.84 231685.36 00:29:03.649 05:29:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:29:05.026 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 426837 00:29:05.026 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:29:05.026 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:05.026 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:05.026 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # sync 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set +e 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:05.027 rmmod nvme_tcp 00:29:05.027 rmmod nvme_fabrics 00:29:05.027 rmmod nvme_keyring 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@128 -- # set -e 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # return 0 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@517 -- # '[' -n 426837 ']' 
00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@518 -- # killprocess 426837 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 426837 ']' 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 426837 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 426837 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 426837' 00:29:05.027 killing process with pid 426837 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 426837 00:29:05.027 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 426837 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:05.286 05:29:18 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # iptr 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-save 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@791 -- # iptables-restore 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:05.286 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:05.287 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:05.287 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:05.287 05:29:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.192 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:07.192 00:29:07.192 real 0m7.285s 00:29:07.192 user 0m21.416s 00:29:07.192 sys 0m1.315s 00:29:07.192 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:07.192 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:29:07.192 ************************************ 00:29:07.192 END TEST nvmf_shutdown_tc2 00:29:07.192 ************************************ 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:29:07.451 05:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:07.451 ************************************ 00:29:07.451 START TEST nvmf_shutdown_tc3 00:29:07.451 ************************************ 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:07.451 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:07.452 05:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # net_devs=() 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # e810=() 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # local -ga e810 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # x722=() 
00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # local -ga x722 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # mlx=() 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@343 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:07.452 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.452 05:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:07.452 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:07.452 Found net devices under 0000:af:00.0: cvl_0_0 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:07.452 Found net devices under 0000:af:00.1: cvl_0_1 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:07.452 
05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:07.452 05:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:07.452 05:29:20 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:07.453 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:07.453 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:07.453 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:07.453 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@290 -- # ping -c 
1 10.0.0.2 00:29:07.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:07.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:29:07.712 00:29:07.712 --- 10.0.0.2 ping statistics --- 00:29:07.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.712 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:07.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:07.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:29:07.712 00:29:07.712 --- 10.0.0.1 ping statistics --- 00:29:07.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:07.712 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # return 0 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@502 -- # 
modprobe nvme-tcp 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@509 -- # nvmfpid=428144 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@510 -- # waitforlisten 428144 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428144 ']' 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:07.712 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
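[editor's note] The `waitforlisten 428144` step above blocks until the freshly launched `nvmf_tgt` process has created its JSON-RPC socket at `/var/tmp/spdk.sock`. A minimal sketch of that polling pattern is below; the timeout and poll interval are illustrative assumptions, and the real helper additionally checks that the PID is still alive and that the socket accepts RPC connections:

```python
import os
import time

def wait_for_unix_socket(path, timeout=10.0, interval=0.1):
    """Poll until a UNIX domain socket appears at `path`; raise on timeout.

    Sketch of the waitforlisten idea only: the real SPDK helper also
    verifies the target PID is alive and the socket answers RPCs.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if os.path.exists(path):  # daemon has created its RPC socket
            return True
        time.sleep(interval)
    raise TimeoutError(f"no socket at {path} after {timeout}s")
```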
00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:07.712 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.712 [2024-12-15 05:29:21.380797] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:07.712 [2024-12-15 05:29:21.380838] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:07.972 [2024-12-15 05:29:21.455343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:07.972 [2024-12-15 05:29:21.477866] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:07.972 [2024-12-15 05:29:21.477902] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:07.972 [2024-12-15 05:29:21.477910] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:07.972 [2024-12-15 05:29:21.477917] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:07.972 [2024-12-15 05:29:21.477923] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
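[editor's note] The target was started with `-m 0x1E`, and the reactor notices that follow confirm cores 1 through 4. The coremask is a plain bitmap: bit *n* set means a reactor runs on core *n*. A standalone decoding sketch (not SPDK code):

```python
def coremask_to_cores(mask):
    """Return the core indices whose bits are set in a hex coremask."""
    return [bit for bit in range(mask.bit_length()) if mask >> bit & 1]

# 0x1E == 0b11110: cores 1, 2, 3 and 4 -- matching the four
# "Reactor started on core N" lines in the log above.
print(coremask_to_cores(0x1E))  # → [1, 2, 3, 4]
```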
00:29:07.972 [2024-12-15 05:29:21.479275] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:07.972 [2024-12-15 05:29:21.479384] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:07.972 [2024-12-15 05:29:21.479492] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:07.972 [2024-12-15 05:29:21.479493] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.972 [2024-12-15 05:29:21.623405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:07.972 05:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:07.972 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
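[editor's note] The `shutdown.sh@28`/`@29` loop above runs once per entry in `num_subsystems=({1..10})`, appending one RPC fragment to `rpcs.txt` via `cat` on each pass. The heredoc body itself is not shown in this excerpt; the sketch below only reproduces the per-subsystem NQN fan-out, using the `cnode$subsystem` pattern visible in the bdevperf JSON further down the log:

```python
def gen_subsystem_nqns(count):
    """One NQN per subsystem index, mirroring num_subsystems=({1..10})."""
    return [f"nqn.2016-06.io.spdk:cnode{i}" for i in range(1, count + 1)]

# The tc3 test iterates ten times, emitting one RPC block per NQN.
print(gen_subsystem_nqns(10))
```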
00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:08.231 05:29:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.231 Malloc1 00:29:08.231 [2024-12-15 05:29:21.732922] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.231 Malloc2 00:29:08.231 Malloc3 00:29:08.231 Malloc4 00:29:08.231 Malloc5 00:29:08.490 Malloc6 00:29:08.490 Malloc7 00:29:08.490 Malloc8 00:29:08.490 Malloc9 
00:29:08.490 Malloc10 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=428388 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 428388 /var/tmp/bdevperf.sock 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 428388 ']' 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:29:08.490 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:29:08.490 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # config=() 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@560 -- # local subsystem config 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 "adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.491 "hdgst": ${hdgst:-false}, 00:29:08.491 "ddgst": ${ddgst:-false} 00:29:08.491 }, 00:29:08.491 "method": "bdev_nvme_attach_controller" 00:29:08.491 } 00:29:08.491 EOF 00:29:08.491 )") 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.491 { 00:29:08.491 "params": { 00:29:08.491 "name": "Nvme$subsystem", 00:29:08.491 "trtype": "$TEST_TRANSPORT", 00:29:08.491 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.491 
"adrfam": "ipv4", 00:29:08.491 "trsvcid": "$NVMF_PORT", 00:29:08.491 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.491 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.491 "hdgst": ${hdgst:-false}, 00:29:08.491 "ddgst": ${ddgst:-false} 00:29:08.491 }, 00:29:08.491 "method": "bdev_nvme_attach_controller" 00:29:08.491 } 00:29:08.491 EOF 00:29:08.491 )") 00:29:08.491 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.751 { 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme$subsystem", 00:29:08.751 "trtype": "$TEST_TRANSPORT", 00:29:08.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "$NVMF_PORT", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.751 "hdgst": ${hdgst:-false}, 00:29:08.751 "ddgst": ${ddgst:-false} 00:29:08.751 }, 00:29:08.751 "method": "bdev_nvme_attach_controller" 00:29:08.751 } 00:29:08.751 EOF 00:29:08.751 )") 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.751 { 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme$subsystem", 00:29:08.751 "trtype": "$TEST_TRANSPORT", 00:29:08.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "$NVMF_PORT", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.751 "hdgst": ${hdgst:-false}, 00:29:08.751 "ddgst": ${ddgst:-false} 00:29:08.751 }, 00:29:08.751 "method": "bdev_nvme_attach_controller" 00:29:08.751 } 00:29:08.751 EOF 00:29:08.751 )") 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.751 { 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme$subsystem", 00:29:08.751 "trtype": "$TEST_TRANSPORT", 00:29:08.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "$NVMF_PORT", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.751 "hdgst": ${hdgst:-false}, 00:29:08.751 "ddgst": ${ddgst:-false} 00:29:08.751 }, 00:29:08.751 "method": "bdev_nvme_attach_controller" 00:29:08.751 } 00:29:08.751 EOF 00:29:08.751 )") 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.751 { 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme$subsystem", 00:29:08.751 "trtype": "$TEST_TRANSPORT", 00:29:08.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "$NVMF_PORT", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.751 "hdgst": ${hdgst:-false}, 00:29:08.751 "ddgst": 
${ddgst:-false} 00:29:08.751 }, 00:29:08.751 "method": "bdev_nvme_attach_controller" 00:29:08.751 } 00:29:08.751 EOF 00:29:08.751 )") 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.751 [2024-12-15 05:29:22.205010] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:08.751 [2024-12-15 05:29:22.205055] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid428388 ] 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:29:08.751 { 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme$subsystem", 00:29:08.751 "trtype": "$TEST_TRANSPORT", 00:29:08.751 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "$NVMF_PORT", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.751 "hdgst": ${hdgst:-false}, 00:29:08.751 "ddgst": ${ddgst:-false} 00:29:08.751 }, 00:29:08.751 "method": "bdev_nvme_attach_controller" 00:29:08.751 } 00:29:08.751 EOF 00:29:08.751 )") 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@582 -- # cat 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@584 -- # jq . 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@585 -- # IFS=, 00:29:08.751 05:29:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme1", 00:29:08.751 "trtype": "tcp", 00:29:08.751 "traddr": "10.0.0.2", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "4420", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:08.751 "hdgst": false, 00:29:08.751 "ddgst": false 00:29:08.751 }, 00:29:08.751 "method": "bdev_nvme_attach_controller" 00:29:08.751 },{ 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme2", 00:29:08.751 "trtype": "tcp", 00:29:08.751 "traddr": "10.0.0.2", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "4420", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:29:08.751 "hdgst": false, 00:29:08.751 "ddgst": false 00:29:08.751 }, 00:29:08.751 "method": "bdev_nvme_attach_controller" 00:29:08.751 },{ 00:29:08.751 "params": { 00:29:08.751 "name": "Nvme3", 00:29:08.751 "trtype": "tcp", 00:29:08.751 "traddr": "10.0.0.2", 00:29:08.751 "adrfam": "ipv4", 00:29:08.751 "trsvcid": "4420", 00:29:08.751 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:29:08.751 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752
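The trace above shows nvmf/common.sh building one JSON fragment per subsystem with a heredoc, appending each fragment to a bash array, and joining them with `IFS=,` before the final `printf`/`jq` step. A minimal standalone sketch of that pattern follows; the literal transport, address, and port values are illustrative stand-ins for `$TEST_TRANSPORT`, `$NVMF_FIRST_TARGET_IP`, and `$NVMF_PORT`:

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config assembly seen in the trace: each
# loop iteration captures a heredoc into the config array, and
# "${config[*]}" with IFS=, joins the fragments into the "},{"-separated
# string that the printf '%s\n' line in the log prints.
config=()
for subsystem in 1 2 3; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
joined="${config[*]}"   # fragments joined with "," -> "},{" boundaries
printf '%s\n' "$joined"
```

In the real script the joined string is handed to bdevperf as its controller configuration; `${hdgst:-false}`/`${ddgst:-false}` default the digest flags to `false` when unset, which is why the merged output in the log shows `"hdgst": false`.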
"method": "bdev_nvme_attach_controller" 00:29:08.752 },{ 00:29:08.752 "params": { 00:29:08.752 "name": "Nvme4", 00:29:08.752 "trtype": "tcp", 00:29:08.752 "traddr": "10.0.0.2", 00:29:08.752 "adrfam": "ipv4", 00:29:08.752 "trsvcid": "4420", 00:29:08.752 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:29:08.752 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752 "method": "bdev_nvme_attach_controller" 00:29:08.752 },{ 00:29:08.752 "params": { 00:29:08.752 "name": "Nvme5", 00:29:08.752 "trtype": "tcp", 00:29:08.752 "traddr": "10.0.0.2", 00:29:08.752 "adrfam": "ipv4", 00:29:08.752 "trsvcid": "4420", 00:29:08.752 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:29:08.752 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752 "method": "bdev_nvme_attach_controller" 00:29:08.752 },{ 00:29:08.752 "params": { 00:29:08.752 "name": "Nvme6", 00:29:08.752 "trtype": "tcp", 00:29:08.752 "traddr": "10.0.0.2", 00:29:08.752 "adrfam": "ipv4", 00:29:08.752 "trsvcid": "4420", 00:29:08.752 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:29:08.752 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752 "method": "bdev_nvme_attach_controller" 00:29:08.752 },{ 00:29:08.752 "params": { 00:29:08.752 "name": "Nvme7", 00:29:08.752 "trtype": "tcp", 00:29:08.752 "traddr": "10.0.0.2", 00:29:08.752 "adrfam": "ipv4", 00:29:08.752 "trsvcid": "4420", 00:29:08.752 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:29:08.752 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752 "method": "bdev_nvme_attach_controller" 00:29:08.752 },{ 00:29:08.752 "params": { 00:29:08.752 "name": "Nvme8", 00:29:08.752 "trtype": "tcp", 00:29:08.752 "traddr": "10.0.0.2", 00:29:08.752 "adrfam": "ipv4", 00:29:08.752 "trsvcid": "4420", 00:29:08.752 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:29:08.752 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752 "method": "bdev_nvme_attach_controller" 00:29:08.752 },{ 00:29:08.752 "params": { 00:29:08.752 "name": "Nvme9", 00:29:08.752 "trtype": "tcp", 00:29:08.752 "traddr": "10.0.0.2", 00:29:08.752 "adrfam": "ipv4", 00:29:08.752 "trsvcid": "4420", 00:29:08.752 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:29:08.752 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752 "method": "bdev_nvme_attach_controller" 00:29:08.752 },{ 00:29:08.752 "params": { 00:29:08.752 "name": "Nvme10", 00:29:08.752 "trtype": "tcp", 00:29:08.752 "traddr": "10.0.0.2", 00:29:08.752 "adrfam": "ipv4", 00:29:08.752 "trsvcid": "4420", 00:29:08.752 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:29:08.752 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:29:08.752 "hdgst": false, 00:29:08.752 "ddgst": false 00:29:08.752 }, 00:29:08.752 "method": "bdev_nvme_attach_controller" 00:29:08.752 }' 00:29:08.752 [2024-12-15 05:29:22.283306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:08.752 [2024-12-15 05:29:22.305626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:10.655 Running I/O for 10 seconds... 
00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:10.655 05:29:24 
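The `trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT` line traced above registers cleanup before the I/O run starts, so bdevperf is torn down even if the test is interrupted. A reduced sketch of the idiom; the `cleanup` body here is a stand-in for the real `process_shm`/`nvmftestfini` helpers:

```shell
#!/usr/bin/env bash
# Reduced sketch of the cleanup-trap idiom from the trace. perfpid would
# hold bdevperf's pid; the echo stands in for the real teardown helpers.
perfpid=$$
cleanup() {
  echo "cleanup: would kill -9 $perfpid (|| true) and tear down nvmf state"
}
# Interrupts run cleanup and force a failing exit status; normal exit
# still runs cleanup, mirroring 'trap ... SIGINT SIGTERM EXIT'.
trap 'cleanup; exit 1' SIGINT SIGTERM
trap 'cleanup' EXIT
```

The `|| true` on the real `kill -9` matters: the perf process may already have exited by the time the trap fires, and an unguarded failing `kill` would mask the test's own exit status.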
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:29:10.655 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:29:10.933 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 428144 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428144 ']' 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428144 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 428144 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 428144' 00:29:10.934 killing process with pid 428144 00:29:10.934 05:29:24 
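The `waitforio` trace above polls `bdev_get_iostat` over the bdevperf RPC socket up to ten times, extracting `num_read_ops` with jq and succeeding once at least 100 reads have completed (the first poll here returned 3, the second 131). A standalone sketch of that loop, with the RPC call replaced by an illustrative stub `get_read_io_count` (not a real helper):

```shell
#!/usr/bin/env bash
# Sketch of the waitforio loop: poll a read counter up to 10 times,
# 0.25s apart, and return success once it reaches 100.
# get_read_io_count stands in for the traced RPC pipeline:
#   rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 \
#     | jq -r '.bdevs[0].num_read_ops'
waitforio() {
  local ret=1 i count
  for ((i = 10; i != 0; i--)); do
    count=$(get_read_io_count)
    if [ "$count" -ge 100 ]; then
      ret=0
      break
    fi
    sleep 0.25
  done
  return $ret
}

# Example stub so the sketch runs standalone; replace with the RPC call.
get_read_io_count() { echo 131; }
waitforio && echo "I/O threshold reached"
```

The bounded retry count is what lets the shutdown test fail fast instead of hanging when bdevperf never starts issuing reads.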
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 428144 00:29:10.934 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 428144 00:29:10.934 [2024-12-15 05:29:24.483086] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e5b40 is same with the state(6) to be set 00:29:10.934 [2024-12-15 05:29:24.485316] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc5d2d0 is same with the state(6) to be set 00:29:10.935 [2024-12-15 05:29:24.487447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set
is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487669] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487676] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487695] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487707] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487714] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487720] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487726] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487732] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 
00:29:10.936 [2024-12-15 05:29:24.487747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487765] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487772] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487785] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487804] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487810] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487823] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487829] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487835] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487841] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487847] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487853] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6030 is same with the state(6) to be set 00:29:10.936 [2024-12-15 05:29:24.487943] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:10.936 [2024-12-15 05:29:24.489459] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:10.936 [2024-12-15 05:29:24.490005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.936 [2024-12-15 05:29:24.490029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.936 [2024-12-15 05:29:24.490046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.936 [2024-12-15 05:29:24.490053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[previous READ/ABORTED pair repeated for cid:6 through cid:63 (lba 8960 through 16256, stepping by 128), timestamps 05:29:24.490063 through 05:29:24.490932]
00:29:10.938 [2024-12-15 05:29:24.490940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.938 [2024-12-15 05:29:24.490947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[previous WRITE/ABORTED pair repeated for cid:1 through cid:3 (lba 16512 through 16768), timestamps 05:29:24.490955 through 05:29:24.490996]
00:29:10.938 [2024-12-15
05:29:24.491004] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1566a00 is same with the state(6) to be set
00:29:10.938 [2024-12-15 05:29:24.492745] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller
00:29:10.938 [2024-12-15 05:29:24.492803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715120 (9): Bad file descriptor
00:29:10.938 [2024-12-15 05:29:24.492852] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:10.938 [2024-12-15 05:29:24.492866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[previous ASYNC EVENT REQUEST/ABORTED pair repeated for cid:1 through cid:3, timestamps 05:29:24.492874 through 05:29:24.492908]
00:29:10.938 [2024-12-15 05:29:24.492915] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b4420 is same with the state(6) to be set
00:29:10.938 [2024-12-15 05:29:24.492951] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:10.938 [2024-12-15 05:29:24.492959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[previous ASYNC EVENT REQUEST/ABORTED pair repeated for cid:1 through cid:3, timestamps 05:29:24.492967 through 05:29:24.493007]
00:29:10.938 [2024-12-15 05:29:24.493014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc0b0 is same with the state(6) to be set
00:29:10.938 [2024-12-15 05:29:24.493086] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:29:10.938 [2024-12-15 05:29:24.493377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set
00:29:10.938 [2024-12-15 05:29:24.493412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6)
to be set 00:29:10.938 [2024-12-15 05:29:24.493420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493428] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493441] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493447] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493468] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493480] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493486] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493500] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 
05:29:24.493506] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493526] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493532] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493537] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493551] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493569] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493575] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493581] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493587] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493594] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493600] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493606] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493612] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493619] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493626] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.938 [2024-12-15 05:29:24.493632] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493640] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493646] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493652] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493658] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493665] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493671] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493677] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493682] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493708] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493734] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 
is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493741] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493747] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493753] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493759] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493766] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493773] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493784] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493791] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.493817] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6500 is same with the state(6) to be set 
00:29:10.939 [2024-12-15 05:29:24.495049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495083] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495090] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495104] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495110] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495117] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495151] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495158] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495164] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495171] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495177] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495183] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495190] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495197] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495209] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495215] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495234] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495240] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495251] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495257] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495270] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495277] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495284] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495290] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495302] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495308] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 
is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495314] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495320] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495326] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495333] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495340] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495346] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495371] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495377] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495383] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 
00:29:10.939 [2024-12-15 05:29:24.495389] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495401] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495407] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495413] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495420] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495426] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495432] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.939 [2024-12-15 05:29:24.495438] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.495444] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.495449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.495455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.495461] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.495467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e69f0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.495625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.940 [2024-12-15 05:29:24.495651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1715120 with addr=10.0.0.2, port=4420 00:29:10.940 [2024-12-15 05:29:24.495660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715120 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.495979] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715120 (9): Bad file descriptor 00:29:10.940 [2024-12-15 05:29:24.496263] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496286] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496292] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496299] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 
is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496331] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496339] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496351] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496357] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496379] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496386] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496392] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496399] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 
00:29:10.940 [2024-12-15 05:29:24.496405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496419] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496424] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496430] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496436] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496443] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496449] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496455] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496461] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496479] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496485] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496491] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496498] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496504] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with t[2024-12-15 05:29:24.496499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in erhe state(6) to be set 00:29:10.940 ror state 00:29:10.940 [2024-12-15 05:29:24.496513] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:10.940 [2024-12-15 05:29:24.496519] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with t[2024-12-15 05:29:24.496527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:29:10.940 he state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496540] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:10.940 [2024-12-15 05:29:24.496546] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496553] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496560] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496566] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496572] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496598] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496604] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 
00:29:10.940 [2024-12-15 05:29:24.496610] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496616] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496622] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496628] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496635] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496648] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496654] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496660] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496666] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496672] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496678] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496684] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.496690] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e6ec0 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.497193] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:10.940 [2024-12-15 05:29:24.497466] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:10.940 [2024-12-15 05:29:24.498185] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:10.940 [2024-12-15 05:29:24.500445] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.500471] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.940 [2024-12-15 05:29:24.500478] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500484] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500490] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500496] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500502] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500508] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500514] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500520] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500527] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500533] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500539] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500544] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500550] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500556] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500562] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500568] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500574] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500579] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500585] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500591] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500597] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500605] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500611] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500617] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500624] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500630] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500637] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500643] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500649] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500655] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500661] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 
is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500674] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500680] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500686] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500691] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500697] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500703] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500715] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500721] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500727] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500733] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500738] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 
00:29:10.941 [2024-12-15 05:29:24.500744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500762] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500768] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500774] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500799] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500811] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500817] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500828] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500834] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.500840] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7240 is same with the state(6) to be set 00:29:10.941 [2024-12-15 05:29:24.501499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501659] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.941 [2024-12-15 05:29:24.501696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.941 [2024-12-15 05:29:24.501705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501740] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501816] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the 
state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.501832] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 
[2024-12-15 05:29:24.501845] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.501852] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 
[2024-12-15 05:29:24.501859] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.501866] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 
[2024-12-15 05:29:24.501874] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.501882] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 
[2024-12-15 05:29:24.501890] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501896] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.501905] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 
[2024-12-15 05:29:24.501915] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.501917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.501922] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501925] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501928] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501938] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501946] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501954] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.501960] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501967] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.501974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.501982] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501990] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.501990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.502001] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.502004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.502009] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.502013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 [2024-12-15 05:29:24.502017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 [2024-12-15 05:29:24.502020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 [2024-12-15 05:29:24.502024] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to 
be set 00:29:10.942 [2024-12-15 05:29:24.502031] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.502036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.502037] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.502045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 
[2024-12-15 05:29:24.502045] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.502056] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.502057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.502062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.502065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.942 
[2024-12-15 05:29:24.502073] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.502076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.942 
[2024-12-15 05:29:24.502080] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.942 
[2024-12-15 05:29:24.502084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502094] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502102] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502109] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502125] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502131] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502138] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502145] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502154] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502161] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502168] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502176] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502184] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502191] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502199] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502208] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502217] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502224] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502230] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502237] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502247] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 
[2024-12-15 05:29:24.502267] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 
[2024-12-15 05:29:24.502274] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 
[2024-12-15 05:29:24.502278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502281] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 [2024-12-15 05:29:24.502286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502289] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 [2024-12-15 05:29:24.502295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502296] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 [2024-12-15 05:29:24.502303] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with t[2024-12-15 05:29:24.502304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 che state(6) to be set 00:29:10.943 dw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7710 is same with the state(6) to be set 00:29:10.943 [2024-12-15 05:29:24.502314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:29:10.943 [2024-12-15 05:29:24.502425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.943 [2024-12-15 05:29:24.502434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502507] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.944 [2024-12-15 05:29:24.502573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.502581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2169840 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state 
of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503111] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503118] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503124] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503130] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503136] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503151] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503157] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503163] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503169] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503174] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503181] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the 
state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503187] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503193] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503198] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503204] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503210] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503216] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503222] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503228] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503233] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503239] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503245] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503252] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 
05:29:24.503258] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503264] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503269] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503276] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503282] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503288] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503294] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503300] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503306] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503313] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503319] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503325] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503331] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503337] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503345] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503352] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503358] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503364] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503370] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503376] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503382] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503388] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503393] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503400] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503406] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503412] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503417] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503423] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503429] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503435] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set 00:29:10.944 [2024-12-15 05:29:24.503731] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:10.944 [2024-12-15 05:29:24.503781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b2ad0 (9): Bad file descriptor 00:29:10.944 [2024-12-15 05:29:24.503805] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.944 [2024-12-15 05:29:24.503813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.503820] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.944 [2024-12-15 05:29:24.503827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.944 [2024-12-15 05:29:24.503834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: 
ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.503844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.503851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.503857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.503864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a6270 is same with the state(6) to be set 00:29:10.945 [2024-12-15 05:29:24.503886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b4420 (9): Bad file descriptor 00:29:10.945 [2024-12-15 05:29:24.503910] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.503918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.503925] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.503931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.503939] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.503945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 
05:29:24.503952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.503958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.503965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0690 is same with the state(6) to be set 00:29:10.945 [2024-12-15 05:29:24.503989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x171ccb0 is same with the state(6) to be set 00:29:10.945 [2024-12-15 05:29:24.504068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bc0b0 (9): Bad file descriptor 00:29:10.945 [2024-12-15 05:29:24.504091] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504109] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504148] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad2c0 is same with the state(6) to be set 00:29:10.945 [2024-12-15 05:29:24.504170] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504178] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504199] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504213] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:29:10.945 [2024-12-15 05:29:24.504219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.945 [2024-12-15 05:29:24.504225] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6ff0 is same with the state(6) to be set 00:29:10.945 [2024-12-15 05:29:24.504819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.945 [2024-12-15 05:29:24.504838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b2ad0 with addr=10.0.0.2, port=4420 00:29:10.945 [2024-12-15 05:29:24.504846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2ad0 is same with the state(6) to be set 00:29:10.945 [2024-12-15 05:29:24.504909] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:29:10.945 [2024-12-15 05:29:24.504966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:10.945 [2024-12-15 05:29:24.504985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b2ad0 (9): Bad file descriptor 00:29:10.945 [2024-12-15 05:29:24.505213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.945 [2024-12-15 05:29:24.505228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1715120 with addr=10.0.0.2, port=4420 00:29:10.945 [2024-12-15 05:29:24.505236] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715120 is same with the state(6) to be set 00:29:10.945 [2024-12-15 05:29:24.505244] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:10.945 [2024-12-15 05:29:24.505253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:10.945 [2024-12-15 05:29:24.505261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:10.945 [2024-12-15 05:29:24.505268] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:10.945 [2024-12-15 05:29:24.505343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715120 (9): Bad file descriptor 00:29:10.945 [2024-12-15 05:29:24.505411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:10.945 [2024-12-15 05:29:24.505419] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:10.945 [2024-12-15 05:29:24.505426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:29:10.945 [2024-12-15 05:29:24.505431] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed.
00:29:10.945 [2024-12-15 05:29:24.511701] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9e7c00 is same with the state(6) to be set
00:29:10.945 [previous message repeated 7 more times through 05:29:24.511751]
00:29:10.945 [2024-12-15 05:29:24.511928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.945 [2024-12-15 05:29:24.511943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.946 [READ command/completion pair repeated for cid:1 through cid:63 (lba 16512 through 24448, step 128), all ABORTED - SQ DELETION, 05:29:24.511954 through 05:29:24.512948]
00:29:10.947 [2024-12-15 05:29:24.512956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2604a00 is same with the state(6) to be set
00:29:10.947 [2024-12-15 05:29:24.513925] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller
00:29:10.947 [2024-12-15 05:29:24.513943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ccb0 (9): Bad file descriptor
00:29:10.947 [2024-12-15 05:29:24.513954] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a6270 (9): Bad file descriptor
00:29:10.947 [2024-12-15 05:29:24.513984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:29:10.947 [2024-12-15 05:29:24.514000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.947 [ASYNC EVENT REQUEST command/completion pair repeated for cid:1 through cid:3, all ABORTED - SQ DELETION, 05:29:24.514008 through 05:29:24.514044]
00:29:10.947 [2024-12-15 05:29:24.514051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711b50 is same with the state(6) to be set
00:29:10.947 [2024-12-15 05:29:24.514071] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0690 (9): Bad file descriptor
00:29:10.947 [2024-12-15 05:29:24.514091] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ad2c0 (9): Bad file descriptor
00:29:10.947 [2024-12-15 05:29:24.514104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6ff0 (9): Bad file descriptor
00:29:10.947 [2024-12-15 05:29:24.514199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:29:10.947 [2024-12-15 05:29:24.514209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:29:10.948 [READ command/completion pair repeated for cid:1 through cid:42 (lba 16512 through 21760, step 128), all ABORTED - SQ DELETION, 05:29:24.514219 through 05:29:24.514828; final completion entry truncated in the captured log]
p:0 m:0 dnr:0 00:29:10.948 [2024-12-15 05:29:24.514836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.948 [2024-12-15 05:29:24.514843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.948 [2024-12-15 05:29:24.514851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.948 [2024-12-15 05:29:24.514857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.948 [2024-12-15 05:29:24.514865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.948 [2024-12-15 05:29:24.514872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.948 [2024-12-15 05:29:24.514880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.948 [2024-12-15 05:29:24.514886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.948 [2024-12-15 05:29:24.519358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.948 [2024-12-15 05:29:24.519368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.948 [2024-12-15 05:29:24.519377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 
05:29:24.519385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519467] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.519607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.519614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b6ba0 is same with the state(6) to be set 00:29:10.949 [2024-12-15 05:29:24.520600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 
05:29:24.520623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520707] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 
nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:10.949 [2024-12-15 05:29:24.520882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.949 [2024-12-15 05:29:24.520965] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.949 [2024-12-15 05:29:24.520973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.520979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.520987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.520997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 
05:29:24.521220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521303] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 
[2024-12-15 05:29:24.521472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521553] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.950 [2024-12-15 05:29:24.521560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.950 [2024-12-15 05:29:24.521568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b7d30 is same with the state(6) to be set 00:29:10.950 [2024-12-15 05:29:24.522760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:10.950 [2024-12-15 05:29:24.522777] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:10.951 [2024-12-15 05:29:24.523031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.951 [2024-12-15 05:29:24.523046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171ccb0 with addr=10.0.0.2, port=4420 00:29:10.951 [2024-12-15 05:29:24.523054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ccb0 is same with the state(6) to be set 00:29:10.951 [2024-12-15 05:29:24.523346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.951 [2024-12-15 05:29:24.523362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b4420 with addr=10.0.0.2, port=4420 00:29:10.951 [2024-12-15 05:29:24.523370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b4420 is same with the state(6) to be set 00:29:10.951 [2024-12-15 05:29:24.523492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.951 [2024-12-15 05:29:24.523503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bc0b0 with addr=10.0.0.2, port=4420 00:29:10.951 [2024-12-15 05:29:24.523510] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc0b0 is same with the state(6) to be set 00:29:10.951 [2024-12-15 05:29:24.523519] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ccb0 (9): Bad file descriptor 00:29:10.951 [2024-12-15 05:29:24.523984] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:10.951 [2024-12-15 05:29:24.524008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:10.951 [2024-12-15 05:29:24.524028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b4420 (9): Bad file descriptor 00:29:10.951 [2024-12-15 05:29:24.524037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bc0b0 (9): Bad file descriptor 00:29:10.951 [2024-12-15 05:29:24.524044] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:29:10.951 [2024-12-15 05:29:24.524051] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:10.951 [2024-12-15 05:29:24.524059] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:10.951 [2024-12-15 05:29:24.524066] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 
00:29:10.951 [2024-12-15 05:29:24.524080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711b50 (9): Bad file descriptor 00:29:10.951 [2024-12-15 05:29:24.524376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.951 [2024-12-15 05:29:24.524391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b2ad0 with addr=10.0.0.2, port=4420 00:29:10.951 [2024-12-15 05:29:24.524399] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2ad0 is same with the state(6) to be set 00:29:10.951 [2024-12-15 05:29:24.524596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.951 [2024-12-15 05:29:24.524606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1715120 with addr=10.0.0.2, port=4420 00:29:10.951 [2024-12-15 05:29:24.524613] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715120 is same with the state(6) to be set 00:29:10.951 [2024-12-15 05:29:24.524620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:10.951 [2024-12-15 05:29:24.524626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:10.951 [2024-12-15 05:29:24.524633] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:10.951 [2024-12-15 05:29:24.524640] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:10.951 [2024-12-15 05:29:24.524647] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:10.951 [2024-12-15 05:29:24.524653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:10.951 [2024-12-15 05:29:24.524660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:10.951 [2024-12-15 05:29:24.524666] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:10.951 [2024-12-15 05:29:24.524705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 
[2024-12-15 05:29:24.524862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524944] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.524982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.524990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.525003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.525011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.525018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.525026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.525033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.525041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.525048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.951 [2024-12-15 05:29:24.525056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.951 [2024-12-15 05:29:24.525063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525207] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525288] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 
05:29:24.525461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525541] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.952 [2024-12-15 05:29:24.525579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.952 [2024-12-15 05:29:24.525587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.525595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.525602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.525610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.525616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.525624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 
nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.525631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.525639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.525646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.525654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.525661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.525669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.525676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.525683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16b1130 is same with the state(6) to be set 00:29:10.953 [2024-12-15 05:29:24.526666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.526682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.526692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.953 [2024-12-15 05:29:24.526702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.526710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.526717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.526726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.526733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.526741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.526747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.526756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.526762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.526771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.526778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.953 [2024-12-15 05:29:24.526786] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.953 [2024-12-15 05:29:24.526792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" record pairs repeated for cid:8 through cid:63 (lba advancing by 128 blocks per command, 17408 through 24448), timestamps 05:29:24.526801 through 05:29:24.527691 ...]
00:29:10.954 [2024-12-15 05:29:24.527700] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16be320 is same with the state(6) to be set
00:29:10.954 [2024-12-15 05:29:24.528683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.954 [2024-12-15 05:29:24.528696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / "ABORTED - SQ DELETION (00/08)" record pairs repeated for cid:1 through cid:54 (lba advancing by 128 blocks per command, 16512 through 23296), timestamps 05:29:24.528707 through 05:29:24.529535 ...]
00:29:10.956 [2024-12-15 05:29:24.529544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 
nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.529684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.529692] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bf650 is same with the state(6) to be set 00:29:10.956 [2024-12-15 05:29:24.530668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:29:10.956 [2024-12-15 05:29:24.530714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530797] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.956 [2024-12-15 05:29:24.530933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.956 [2024-12-15 05:29:24.530940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.530948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.530954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.530962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.530969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.530977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.530984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.530996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531058] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531138] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 
05:29:24.531309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531388] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 
nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.957 [2024-12-15 05:29:24.531536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.957 [2024-12-15 05:29:24.531544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.958 [2024-12-15 05:29:24.531550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:29:10.958 [2024-12-15 05:29:24.531558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.958 [2024-12-15 05:29:24.531564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.958 [2024-12-15 05:29:24.531572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.958 [2024-12-15 05:29:24.531581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.958 [2024-12-15 05:29:24.531589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.958 [2024-12-15 05:29:24.531595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.958 [2024-12-15 05:29:24.531603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.958 [2024-12-15 05:29:24.531610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.958 [2024-12-15 05:29:24.531618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:10.958 [2024-12-15 05:29:24.531625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:10.958 [2024-12-15 05:29:24.531631] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23b7020 is same with the state(6) to be set 00:29:10.958 [2024-12-15 05:29:24.532596] 
nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller
00:29:10.958 [2024-12-15 05:29:24.532613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller
00:29:10.958 [2024-12-15 05:29:24.532623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller
00:29:10.958 task offset: 8704 on job bdev=Nvme10n1 fails
00:29:10.958
00:29:10.958                                                           Latency(us)
00:29:10.958 [2024-12-15T04:29:24.645Z] Device Information : runtime(s)    IOPS   MiB/s  Fail/s    TO/s    Average        min        max
00:29:10.958 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme1n1 ended in about 0.65 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme1n1            :       0.65  196.42   12.28   98.21    0.00  214332.30   29834.48  213709.78
00:29:10.958 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme2n1 ended in about 0.65 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme2n1            :       0.65  195.84   12.24   97.92    0.00  209763.31   17975.59  208716.56
00:29:10.958 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme3n1 ended in about 0.66 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme3n1            :       0.66  202.22   12.64   97.31    0.00  200764.79   20222.54  202724.69
00:29:10.958 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme4n1 ended in about 0.66 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme4n1            :       0.66  194.02   12.13   97.01    0.00  201446.48   17226.61  210713.84
00:29:10.958 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme5n1 ended in about 0.66 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme5n1            :       0.66  193.44   12.09   96.72    0.00  196988.67   21470.84  189742.32
00:29:10.958 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme6n1 ended in about 0.63 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme6n1            :       0.63  201.63   12.60  100.82    0.00  182999.53    4930.80  212711.13
00:29:10.958 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme7n1 ended in about 0.66 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme7n1            :       0.66  192.87   12.05   96.44    0.00  187467.58   15291.73  207717.91
00:29:10.958 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme8n1 ended in about 0.65 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme8n1            :       0.65  198.42   12.40   99.21    0.00  176135.15   15978.30  211712.49
00:29:10.958 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme9n1            :       0.63  204.34   12.77    0.00    0.00  247302.34   31082.79  226692.14
00:29:10.958 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:29:10.958 Job: Nvme10n1 ended in about 0.62 seconds with error
00:29:10.958 Verification LBA range: start 0x0 length 0x400
00:29:10.958 Nvme10n1           :       0.62  108.98    6.81  102.57    0.00  231134.50    4181.82  232684.01
00:29:10.958 [2024-12-15T04:29:24.645Z] ===================================================================================================================
00:29:10.958 [2024-12-15T04:29:24.645Z] Total              :            1888.19  118.01  886.21    0.00  202436.79    4181.82  232684.01
00:29:10.958 [2024-12-15 05:29:24.568692] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:29:10.958 [2024-12-15 05:29:24.568746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller
00:29:10.958
[2024-12-15 05:29:24.568809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b2ad0 (9): Bad file descriptor 00:29:10.958 [2024-12-15 05:29:24.568825] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715120 (9): Bad file descriptor 00:29:10.958 [2024-12-15 05:29:24.569421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-12-15 05:29:24.569449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12ad2c0 with addr=10.0.0.2, port=4420 00:29:10.958 [2024-12-15 05:29:24.569460] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12ad2c0 is same with the state(6) to be set 00:29:10.958 [2024-12-15 05:29:24.569657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-12-15 05:29:24.569668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12a6270 with addr=10.0.0.2, port=4420 00:29:10.958 [2024-12-15 05:29:24.569675] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12a6270 is same with the state(6) to be set 00:29:10.958 [2024-12-15 05:29:24.569893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-12-15 05:29:24.569904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b0690 with addr=10.0.0.2, port=4420 00:29:10.958 [2024-12-15 05:29:24.569911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b0690 is same with the state(6) to be set 00:29:10.958 [2024-12-15 05:29:24.570136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-12-15 05:29:24.570147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16e6ff0 with addr=10.0.0.2, port=4420 00:29:10.958 [2024-12-15 05:29:24.570155] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16e6ff0 is same with the state(6) to be set 00:29:10.958 [2024-12-15 05:29:24.570162] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:10.958 [2024-12-15 05:29:24.570169] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:10.958 [2024-12-15 05:29:24.570178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:10.958 [2024-12-15 05:29:24.570187] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:29:10.958 [2024-12-15 05:29:24.570196] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:10.958 [2024-12-15 05:29:24.570203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:10.958 [2024-12-15 05:29:24.570209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:10.958 [2024-12-15 05:29:24.570219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:10.958 [2024-12-15 05:29:24.570275] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 
00:29:10.958 [2024-12-15 05:29:24.571155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:29:10.958 [2024-12-15 05:29:24.571211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12ad2c0 (9): Bad file descriptor 00:29:10.958 [2024-12-15 05:29:24.571223] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12a6270 (9): Bad file descriptor 00:29:10.958 [2024-12-15 05:29:24.571232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b0690 (9): Bad file descriptor 00:29:10.958 [2024-12-15 05:29:24.571240] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16e6ff0 (9): Bad file descriptor 00:29:10.958 [2024-12-15 05:29:24.571291] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:29:10.958 [2024-12-15 05:29:24.571301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:29:10.958 [2024-12-15 05:29:24.571309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:29:10.958 [2024-12-15 05:29:24.571316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:29:10.958 [2024-12-15 05:29:24.571324] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:29:10.958 [2024-12-15 05:29:24.571526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.958 [2024-12-15 05:29:24.571539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x171ccb0 with addr=10.0.0.2, port=4420 00:29:10.958 [2024-12-15 05:29:24.571546] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x171ccb0 is same with the state(6) to be set 00:29:10.958 [2024-12-15 
05:29:24.571553] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:29:10.958 [2024-12-15 05:29:24.571559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.571566] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.571573] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:29:10.959 [2024-12-15 05:29:24.571580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.571585] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.571592] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.571597] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 00:29:10.959 [2024-12-15 05:29:24.571604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.571610] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.571616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.571622] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 
00:29:10.959 [2024-12-15 05:29:24.571628] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.571634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.571643] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.571649] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:29:10.959 [2024-12-15 05:29:24.571819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-12-15 05:29:24.571831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12bc0b0 with addr=10.0.0.2, port=4420 00:29:10.959 [2024-12-15 05:29:24.571837] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bc0b0 is same with the state(6) to be set 00:29:10.959 [2024-12-15 05:29:24.572053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-12-15 05:29:24.572063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b4420 with addr=10.0.0.2, port=4420 00:29:10.959 [2024-12-15 05:29:24.572069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b4420 is same with the state(6) to be set 00:29:10.959 [2024-12-15 05:29:24.572234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-12-15 05:29:24.572243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1711b50 with addr=10.0.0.2, port=4420 00:29:10.959 [2024-12-15 05:29:24.572249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1711b50 is same with the state(6) to be set 00:29:10.959 [2024-12-15 
05:29:24.572455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-12-15 05:29:24.572465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1715120 with addr=10.0.0.2, port=4420 00:29:10.959 [2024-12-15 05:29:24.572472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1715120 is same with the state(6) to be set 00:29:10.959 [2024-12-15 05:29:24.572560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:29:10.959 [2024-12-15 05:29:24.572570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x12b2ad0 with addr=10.0.0.2, port=4420 00:29:10.959 [2024-12-15 05:29:24.572577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12b2ad0 is same with the state(6) to be set 00:29:10.959 [2024-12-15 05:29:24.572585] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x171ccb0 (9): Bad file descriptor 00:29:10.959 [2024-12-15 05:29:24.572614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12bc0b0 (9): Bad file descriptor 00:29:10.959 [2024-12-15 05:29:24.572624] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b4420 (9): Bad file descriptor 00:29:10.959 [2024-12-15 05:29:24.572632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1711b50 (9): Bad file descriptor 00:29:10.959 [2024-12-15 05:29:24.572640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1715120 (9): Bad file descriptor 00:29:10.959 [2024-12-15 05:29:24.572648] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x12b2ad0 (9): Bad file descriptor 00:29:10.959 [2024-12-15 05:29:24.572656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 
00:29:10.959 [2024-12-15 05:29:24.572661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.572668] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.572674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:29:10.959 [2024-12-15 05:29:24.572695] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.572702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.572711] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.572717] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:29:10.959 [2024-12-15 05:29:24.572723] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.572729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.572735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.572741] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:29:10.959 [2024-12-15 05:29:24.572747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.572753] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.572759] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.572765] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:29:10.959 [2024-12-15 05:29:24.572771] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.572776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.572783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.572788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:29:10.959 [2024-12-15 05:29:24.572795] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:29:10.959 [2024-12-15 05:29:24.572801] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:29:10.959 [2024-12-15 05:29:24.572806] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:29:10.959 [2024-12-15 05:29:24.572813] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 
00:29:11.219 05:29:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 428388 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 428388 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 428388 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # sync 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set +e 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:12.599 rmmod nvme_tcp 00:29:12.599 rmmod nvme_fabrics 00:29:12.599 rmmod nvme_keyring 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@128 -- # set -e 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # return 0 00:29:12.599 05:29:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@517 -- # '[' -n 428144 ']' 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@518 -- # killprocess 428144 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 428144 ']' 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 428144 00:29:12.599 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (428144) - No such process 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 428144 is not found' 00:29:12.599 Process with pid 428144 is not found 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # iptr 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-save 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@791 -- # iptables-restore 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # 
remove_spdk_ns 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:12.599 05:29:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.506 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:14.506 00:29:14.506 real 0m7.122s 00:29:14.506 user 0m16.015s 00:29:14.506 sys 0m1.245s 00:29:14.506 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:14.506 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:29:14.506 ************************************ 00:29:14.506 END TEST nvmf_shutdown_tc3 00:29:14.506 ************************************ 00:29:14.506 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:29:14.506 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:29:14.506 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:14.507 ************************************ 00:29:14.507 START TEST nvmf_shutdown_tc4 00:29:14.507 ************************************ 00:29:14.507 05:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # xtrace_disable 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.507 05:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # pci_devs=() 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # net_devs=() 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # e810=() 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@320 -- # local -ga e810 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # x722=() 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # local -ga x722 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # mlx=() 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@322 -- # local -ga mlx 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 
00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:14.507 05:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:14.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:14.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:14.507 05:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 
00:29:14.507 Found net devices under 0000:af:00.0: cvl_0_0 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:14.507 Found net devices under 0000:af:00.1: cvl_0_1 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@442 -- # is_hw=yes 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@445 -- # [[ tcp == 
tcp ]] 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:14.507 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:14.508 05:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:14.508 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:14.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:14.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.388 ms 00:29:14.767 00:29:14.767 --- 10.0.0.2 ping statistics --- 00:29:14.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.767 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:14.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:14.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:29:14.767 00:29:14.767 --- 10.0.0.1 ping statistics --- 00:29:14.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:14.767 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@450 -- # return 0 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:14.767 05:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:14.767 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@509 -- # nvmfpid=429443 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@510 -- # waitforlisten 429443 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 429443 ']' 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.027 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.027 [2024-12-15 05:29:28.502638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:15.027 [2024-12-15 05:29:28.502680] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.027 [2024-12-15 05:29:28.580418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:15.027 [2024-12-15 05:29:28.602430] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.027 [2024-12-15 05:29:28.602468] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:15.027 [2024-12-15 05:29:28.602475] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.027 [2024-12-15 05:29:28.602481] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.027 [2024-12-15 05:29:28.602486] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:15.027 [2024-12-15 05:29:28.603946] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:15.027 [2024-12-15 05:29:28.604056] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:15.027 [2024-12-15 05:29:28.604141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.027 [2024-12-15 05:29:28.604142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.027 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.287 [2024-12-15 05:29:28.744094] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.287 05:29:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.287 05:29:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.287 Malloc1 00:29:15.287 [2024-12-15 05:29:28.853460] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.287 Malloc2 00:29:15.287 Malloc3 00:29:15.287 Malloc4 00:29:15.546 Malloc5 00:29:15.546 Malloc6 00:29:15.546 Malloc7 00:29:15.546 Malloc8 00:29:15.546 Malloc9 
00:29:15.546 Malloc10 00:29:15.805 05:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.805 05:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:29:15.805 05:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.805 05:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:15.805 05:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=429682 00:29:15.805 05:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:29:15.805 05:29:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:29:15.805 [2024-12-15 05:29:29.355092] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 429443 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429443 ']' 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429443 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 429443 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 429443' 00:29:21.082 killing process with pid 429443 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 429443 00:29:21.082 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 429443 00:29:21.082 [2024-12-15 05:29:34.345395] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e000 is same with the state(6) to be set 00:29:21.082 [2024-12-15 05:29:34.345448] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e000 is same with the state(6) to be set 00:29:21.082 [2024-12-15 05:29:34.345844] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345873] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345881] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345887] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345894] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345901] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345907] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345913] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345919] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345926] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345932] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345938] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345950] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345957] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345963] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.345969] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188d170 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346710] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346736] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346744] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346750] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346756] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346763] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346769] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346774] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346780] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346786] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.346792] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e870 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347668] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347689] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347696] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347702] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347709] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347716] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347722] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.347728] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f230 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.348446] 
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e3a0 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.348467] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e3a0 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.348474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e3a0 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.348481] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e3a0 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.348487] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e3a0 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.348494] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188e3a0 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.349318] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18900e0 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.349778] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f720 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.349798] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f720 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.349805] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f720 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.349812] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f720 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.349818] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f720 is same with the state(6) to be set 00:29:21.083 [2024-12-15 05:29:34.349824] 
00:29:21.083 tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x188f720 is same with the state(6) to be set
00:29:21.083 [2024-12-15 05:29:34.356411] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18922d0 is same with the state(6) to be set
00:29:21.083 [2024-12-15 05:29:34.356474] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18922d0 is same with the state(6) to be set
00:29:21.083 Write completed with error (sct=0, sc=8)
00:29:21.083 starting I/O failed: -6
00:29:21.083 [2024-12-15 05:29:34.357049] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18927a0 is same with the state(6) to be set
00:29:21.083 [2024-12-15 05:29:34.357115] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18927a0 is same with the state(6) to be set
00:29:21.083 [2024-12-15 05:29:34.357185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.083 [2024-12-15 05:29:34.357311] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c70 is same with the state(6) to be set
00:29:21.084 [2024-12-15 05:29:34.357363] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1892c70 is same with the state(6) to be set
00:29:21.084 [2024-12-15 05:29:34.357862] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891e00 is same with the state(6) to be set
00:29:21.084 [2024-12-15 05:29:34.357931] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1891e00 is same with the state(6) to be set
00:29:21.084 [2024-12-15 05:29:34.358096] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:21.084 [2024-12-15 05:29:34.359118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:21.085 [2024-12-15 05:29:34.360832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:21.085 NVMe io qpair process completion error
00:29:21.085 [2024-12-15 05:29:34.361012] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7e5a0 is same with the state(6) to be set
00:29:21.085 [2024-12-15 05:29:34.361085] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7e5a0 is same with the state(6) to be set
00:29:21.085 [2024-12-15 05:29:34.361360] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ea90 is same with the state(6) to be set
00:29:21.085 [2024-12-15 05:29:34.361405] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ea90 is same with the state(6) to be set
00:29:21.085 [2024-12-15 05:29:34.361771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:21.085 [2024-12-15 05:29:34.361790] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ef60 is same with the state(6) to be set
00:29:21.085 [2024-12-15 05:29:34.361823] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a7ef60 is same with the state(6) to be set
00:29:21.085 [2024-12-15 05:29:34.362097] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893140 is same with the state(6) to be set
00:29:21.085 [2024-12-15 05:29:34.362142] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1893140 is same with the state(6) to be set
00:29:21.086 [2024-12-15 05:29:34.362642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:21.086 [2024-12-15 05:29:34.363642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.087 [2024-12-15 05:29:34.365337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:21.087 NVMe io qpair process completion error
00:29:21.087 Write completed with error (sct=0, sc=8)
00:29:21.087 starting I/O failed: -6
Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 [2024-12-15 05:29:34.366318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 
00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write 
completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 [2024-12-15 05:29:34.367229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 
00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 Write completed with error (sct=0, sc=8) 00:29:21.087 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with 
error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 [2024-12-15 05:29:34.368212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed 
with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write 
completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 
Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 [2024-12-15 05:29:34.369980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.088 NVMe io qpair process completion error 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write 
completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 [2024-12-15 05:29:34.371035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 
00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.088 Write completed with error (sct=0, sc=8) 00:29:21.088 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write 
completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 [2024-12-15 05:29:34.371904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 
00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with 
error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 [2024-12-15 05:29:34.372888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 starting I/O failed: -6 00:29:21.089 Write completed with error (sct=0, sc=8) 00:29:21.089 
starting I/O failed: -6
00:29:21.089 Write completed with error (sct=0, sc=8)
00:29:21.089 starting I/O failed: -6
00:29:21.090 Write completed with error (sct=0, sc=8)
00:29:21.090 starting I/O failed: -6
00:29:21.090 [2024-12-15 05:29:34.374456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:21.090 NVMe io qpair process completion error
00:29:21.090 Write completed with error (sct=0, sc=8)
00:29:21.090 starting I/O failed: -6
00:29:21.091 Write completed with error (sct=0, sc=8)
00:29:21.091 starting I/O failed: -6
00:29:21.091 [2024-12-15 05:29:34.381888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:21.091 NVMe io qpair process completion error
00:29:21.091 Write completed with error (sct=0, sc=8)
00:29:21.091 starting I/O failed: -6
00:29:21.092 Write completed with error (sct=0, sc=8)
00:29:21.092 starting I/O failed: -6
00:29:21.092 [2024-12-15 05:29:34.384310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:21.092 Write completed with error (sct=0, sc=8)
00:29:21.092 starting I/O failed: -6
00:29:21.092 [2024-12-15 05:29:34.386242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:21.092 NVMe io qpair process completion error
00:29:21.093 Write completed with error (sct=0, sc=8)
00:29:21.093 starting I/O failed: -6
00:29:21.093 Write completed with
error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with 
error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.093 starting I/O failed: -6 00:29:21.093 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed 
with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write 
completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 [2024-12-15 05:29:34.390312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.094 NVMe io qpair process completion error 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write 
completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 [2024-12-15 05:29:34.391293] 
nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 
00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 [2024-12-15 05:29:34.392196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write 
completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.094 starting I/O failed: -6 00:29:21.094 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 
00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 [2024-12-15 05:29:34.393172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with 
error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed 
with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write 
completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 [2024-12-15 05:29:34.394810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:29:21.095 NVMe io qpair process completion error 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting 
I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 starting I/O failed: -6 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.095 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 [2024-12-15 05:29:34.395814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 
(No such device or address) on qpair id 4 00:29:21.096 starting I/O failed: -6 00:29:21.096 starting I/O failed: -6 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 starting I/O failed: -6 00:29:21.096 Write completed with error (sct=0, sc=8) 00:29:21.096 Write completed with error (sct=0, 
sc=8)
00:29:21.096 Write completed with error (sct=0, sc=8)
00:29:21.096 starting I/O failed: -6
00:29:21.096 [2024-12-15 05:29:34.396728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:21.096 [2024-12-15 05:29:34.397791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.097 [2024-12-15 05:29:34.404953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:21.097 NVMe io qpair process completion error
00:29:21.097 [2024-12-15 05:29:34.405963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:29:21.097 [2024-12-15 05:29:34.406841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:29:21.098 [2024-12-15 05:29:34.407860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:29:21.098 [2024-12-15 05:29:34.410241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:29:21.098 NVMe io qpair process completion error
00:29:21.098 Initializing NVMe Controllers
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3
00:29:21.098 Controller IO queue size 128, less than required.
00:29:21.098 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:29:21.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0
00:29:21.099 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0
00:29:21.099 Initialization complete. Launching workers.
00:29:21.099 ========================================================
00:29:21.099 Latency(us)
00:29:21.099 Device Information                                                     :     IOPS    MiB/s  Average      min       max
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0:  2246.42  96.53  56984.89   884.52  105014.23
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  2219.90  95.39  57670.12  1030.94  123573.80
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2252.34  96.78  56888.75   692.79  103070.89
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0:  2231.52  95.89  57428.36   676.80  108174.95
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0:  2233.49  95.97  57390.09   699.72  108993.73
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0:  2256.07  96.94  56832.15   618.11  111071.63
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0:  2192.06  94.19  58571.39   916.67  119002.16
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0:  2170.57  93.27  58475.68   954.56   97299.80
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0:  2156.98  92.68  58856.85   904.38   96831.51
00:29:21.099 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0:  2153.47  92.53  58964.33   704.11   98186.14
00:29:21.099 ========================================================
00:29:21.099 Total                                                                  : 22112.83  950.16  57792.96   618.11  123573.80
00:29:21.099
00:29:21.099 [2024-12-15 05:29:34.413245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131390 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130070 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413335] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11303a0 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11306d0 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413398] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11c1f00 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413427] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1131060 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413455] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcff0 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130d30 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413513] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11316c0 is same with the state(6) to be set
00:29:21.099 [2024-12-15 05:29:34.413542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1130a00 is same with the state(6) to be set
00:29:21.099 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:29:21.099 05:29:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 429682
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 429682
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 429682
00:29:22.036 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini
00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@121 -- # sync 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@124 -- # set +e 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:22.295 rmmod nvme_tcp 00:29:22.295 rmmod nvme_fabrics 00:29:22.295 rmmod nvme_keyring 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@128 -- # set -e 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # return 0 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@517 -- # '[' -n 429443 ']' 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@518 -- # killprocess 429443 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 429443 ']' 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 429443 00:29:22.295 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (429443) - No such process 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 429443 is not found' 00:29:22.295 Process with pid 429443 is not found 00:29:22.295 
05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@297 -- # iptr 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-save 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@791 -- # iptables-restore 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:22.295 05:29:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.201 05:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:24.201 00:29:24.201 real 0m9.748s 00:29:24.201 user 0m25.086s 00:29:24.201 sys 0m4.945s 00:29:24.201 05:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.201 05:29:37 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:29:24.201 ************************************ 00:29:24.201 END TEST nvmf_shutdown_tc4 00:29:24.201 ************************************ 00:29:24.460 05:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:29:24.460 00:29:24.460 real 0m39.780s 00:29:24.460 user 1m36.748s 00:29:24.460 sys 0m13.463s 00:29:24.460 05:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.460 05:29:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:24.460 ************************************ 00:29:24.460 END TEST nvmf_shutdown 00:29:24.460 ************************************ 00:29:24.461 05:29:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:24.461 05:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:24.461 05:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.461 05:29:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:24.461 ************************************ 00:29:24.461 START TEST nvmf_nsid 00:29:24.461 ************************************ 00:29:24.461 05:29:37 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:29:24.461 * Looking for test storage... 
00:29:24.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.461 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:24.461 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lcov --version 00:29:24.461 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.720 
05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.720 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:24.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.721 --rc genhtml_branch_coverage=1 00:29:24.721 --rc genhtml_function_coverage=1 00:29:24.721 --rc genhtml_legend=1 00:29:24.721 --rc geninfo_all_blocks=1 00:29:24.721 --rc 
geninfo_unexecuted_blocks=1 00:29:24.721 00:29:24.721 ' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:24.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.721 --rc genhtml_branch_coverage=1 00:29:24.721 --rc genhtml_function_coverage=1 00:29:24.721 --rc genhtml_legend=1 00:29:24.721 --rc geninfo_all_blocks=1 00:29:24.721 --rc geninfo_unexecuted_blocks=1 00:29:24.721 00:29:24.721 ' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:24.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.721 --rc genhtml_branch_coverage=1 00:29:24.721 --rc genhtml_function_coverage=1 00:29:24.721 --rc genhtml_legend=1 00:29:24.721 --rc geninfo_all_blocks=1 00:29:24.721 --rc geninfo_unexecuted_blocks=1 00:29:24.721 00:29:24.721 ' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:24.721 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.721 --rc genhtml_branch_coverage=1 00:29:24.721 --rc genhtml_function_coverage=1 00:29:24.721 --rc genhtml_legend=1 00:29:24.721 --rc geninfo_all_blocks=1 00:29:24.721 --rc geninfo_unexecuted_blocks=1 00:29:24.721 00:29:24.721 ' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.721 05:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # : 0 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:24.721 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # xtrace_disable 00:29:24.721 05:29:38 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # pci_devs=() 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # net_devs=() 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # e810=() 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@320 -- # local -ga e810 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # x722=() 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # local -ga x722 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@322 -- # mlx=() 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@322 -- # local -ga mlx 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@353 -- # [[ e810 == 
mlx5 ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:31.291 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:31.291 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:31.291 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@378 -- # [[ tcp == 
rdma ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:31.292 Found net devices under 0000:af:00.0: cvl_0_0 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:31.292 Found net devices under 0000:af:00.1: cvl_0_1 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@442 -- # is_hw=yes 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:31.292 05:29:43 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:31.292 05:29:43 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:31.292 PING 10.0.0.2 (10.0.0.2) 
56(84) bytes of data. 00:29:31.292 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.412 ms 00:29:31.292 00:29:31.292 --- 10.0.0.2 ping statistics --- 00:29:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.292 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:31.292 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:31.292 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:29:31.292 00:29:31.292 --- 10.0.0.1 ping statistics --- 00:29:31.292 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:31.292 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@450 -- # return 0 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:31.292 05:29:44 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@509 -- # nvmfpid=434062 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@510 -- # waitforlisten 434062 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434062 ']' 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:31.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:31.292 [2024-12-15 05:29:44.157790] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:31.292 [2024-12-15 05:29:44.157843] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:31.292 [2024-12-15 05:29:44.238087] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.292 [2024-12-15 05:29:44.260273] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:31.292 [2024-12-15 05:29:44.260310] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:31.292 [2024-12-15 05:29:44.260317] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:31.292 [2024-12-15 05:29:44.260323] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:31.292 [2024-12-15 05:29:44.260327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:29:31.292 [2024-12-15 05:29:44.260812] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=434193 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # get_main_ns_ip 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@769 -- # local ip 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # ip_candidates=() 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@770 -- # local -A ip_candidates 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:31.292 
05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:29:31.292 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=aba26536-2204-42f5-a806-046b9589e9b7 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=a2ec4582-752d-43d1-b949-1920758a6763 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8950f060-6f9c-4c51-b0d5-8922c2341e54 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:31.293 null0 00:29:31.293 null1 00:29:31.293 [2024-12-15 05:29:44.439144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:31.293 [2024-12-15 05:29:44.439188] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid434193 ] 00:29:31.293 null2 00:29:31.293 [2024-12-15 05:29:44.443283] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.293 [2024-12-15 05:29:44.467470] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 434193 /var/tmp/tgt2.sock 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 434193 ']' 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 00:29:31.293 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 
00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:31.293 [2024-12-15 05:29:44.511711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:31.293 [2024-12-15 05:29:44.534338] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:29:31.293 05:29:44 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:29:31.550 [2024-12-15 05:29:45.030303] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:31.550 [2024-12-15 05:29:45.046389] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:29:31.550 nvme0n1 nvme0n2 00:29:31.550 nvme1n1 00:29:31.550 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:29:31.550 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:29:31.550 05:29:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 
]] 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:29:32.483 05:29:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid aba26536-2204-42f5-a806-046b9589e9b7 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:29:33.859 05:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=aba26536220442f5a806046b9589e9b7 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo ABA26536220442F5A806046B9589E9B7 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ ABA26536220442F5A806046B9589E9B7 == \A\B\A\2\6\5\3\6\2\2\0\4\4\2\F\5\A\8\0\6\0\4\6\B\9\5\8\9\E\9\B\7 ]] 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid a2ec4582-752d-43d1-b949-1920758a6763 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:29:33.859 
05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=a2ec4582752d43d1b9491920758a6763 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo A2EC4582752D43D1B9491920758A6763 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ A2EC4582752D43D1B9491920758A6763 == \A\2\E\C\4\5\8\2\7\5\2\D\4\3\D\1\B\9\4\9\1\9\2\0\7\5\8\A\6\7\6\3 ]] 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8950f060-6f9c-4c51-b0d5-8922c2341e54 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@787 -- # tr -d - 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 
00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8950f0606f9c4c51b0d58922c2341e54 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8950F0606F9C4C51B0D58922C2341E54 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8950F0606F9C4C51B0D58922C2341E54 == \8\9\5\0\F\0\6\0\6\F\9\C\4\C\5\1\B\0\D\5\8\9\2\2\C\2\3\4\1\E\5\4 ]] 00:29:33.859 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 434193 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434193 ']' 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434193 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434193 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434193' 00:29:34.118 killing process with pid 434193 00:29:34.118 05:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434193 00:29:34.118 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434193 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@121 -- # sync 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@124 -- # set +e 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:34.377 rmmod nvme_tcp 00:29:34.377 rmmod nvme_fabrics 00:29:34.377 rmmod nvme_keyring 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@128 -- # set -e 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # return 0 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@517 -- # '[' -n 434062 ']' 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@518 -- # killprocess 434062 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 434062 ']' 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 434062 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:29:34.377 05:29:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:34.377 05:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 434062 00:29:34.377 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:34.377 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:34.377 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 434062' 00:29:34.377 killing process with pid 434062 00:29:34.377 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 434062 00:29:34.377 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 434062 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@297 -- # iptr 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-save 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@791 -- # iptables-restore 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:34.636 05:29:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:34.636 05:29:48 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.172 05:29:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:37.172 00:29:37.172 real 0m12.264s 00:29:37.172 user 0m9.540s 00:29:37.172 sys 0m5.413s 00:29:37.172 05:29:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.172 05:29:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:29:37.172 ************************************ 00:29:37.172 END TEST nvmf_nsid 00:29:37.172 ************************************ 00:29:37.172 05:29:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:37.172 00:29:37.172 real 18m34.267s 00:29:37.172 user 49m18.217s 00:29:37.172 sys 4m31.824s 00:29:37.172 05:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:37.172 05:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:29:37.172 ************************************ 00:29:37.172 END TEST nvmf_target_extra 00:29:37.172 ************************************ 00:29:37.172 05:29:50 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:37.172 05:29:50 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:37.172 05:29:50 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.172 05:29:50 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:29:37.172 ************************************ 00:29:37.172 START TEST nvmf_host 00:29:37.172 ************************************ 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:29:37.172 * Looking for test storage... 
00:29:37.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lcov --version 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:37.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.172 --rc genhtml_branch_coverage=1 00:29:37.172 --rc genhtml_function_coverage=1 00:29:37.172 --rc genhtml_legend=1 00:29:37.172 --rc geninfo_all_blocks=1 00:29:37.172 --rc geninfo_unexecuted_blocks=1 00:29:37.172 00:29:37.172 ' 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:37.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.172 --rc genhtml_branch_coverage=1 00:29:37.172 --rc genhtml_function_coverage=1 00:29:37.172 --rc genhtml_legend=1 00:29:37.172 --rc 
geninfo_all_blocks=1 00:29:37.172 --rc geninfo_unexecuted_blocks=1 00:29:37.172 00:29:37.172 ' 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:37.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.172 --rc genhtml_branch_coverage=1 00:29:37.172 --rc genhtml_function_coverage=1 00:29:37.172 --rc genhtml_legend=1 00:29:37.172 --rc geninfo_all_blocks=1 00:29:37.172 --rc geninfo_unexecuted_blocks=1 00:29:37.172 00:29:37.172 ' 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:37.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.172 --rc genhtml_branch_coverage=1 00:29:37.172 --rc genhtml_function_coverage=1 00:29:37.172 --rc genhtml_legend=1 00:29:37.172 --rc geninfo_all_blocks=1 00:29:37.172 --rc geninfo_unexecuted_blocks=1 00:29:37.172 00:29:37.172 ' 00:29:37.172 05:29:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 
-- # nvme gen-hostnqn 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # : 0 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:37.173 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:37.173 ************************************ 00:29:37.173 START TEST nvmf_multicontroller 00:29:37.173 ************************************ 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:29:37.173 * Looking for test storage... 
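An editorial note on the `common.sh: line 33: [: : integer expression expected` message recorded above: it is the classic bash failure mode of testing an empty (or unset) variable with the numeric `-eq` operator. A minimal sketch of the failure and the usual guard follows; `SPDK_RUN_NON_ROOT` is a stand-in variable name for illustration, not necessarily the variable common.sh actually tests at line 33.

```shell
#!/bin/bash
# Reproduces the "[: : integer expression expected" error seen in the log:
# an empty string is not a valid operand for the numeric -eq test.
# SPDK_RUN_NON_ROOT is a hypothetical name used only for this sketch.
SPDK_RUN_NON_ROOT=""
if [ "$SPDK_RUN_NON_ROOT" -eq 1 ] 2>/dev/null; then
    echo "non-root run"
fi

# The usual guard: expand with a numeric default so the test operand
# is always an integer, even when the variable is empty or unset.
if [ "${SPDK_RUN_NON_ROOT:-0}" -eq 1 ]; then
    echo "non-root run"
else
    echo "root run"      # prints: root run
fi
```

Without the `2>/dev/null`, the first test emits exactly the diagnostic the log captures; the `${var:-0}` form is the conventional fix.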
00:29:37.173 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lcov --version 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.173 --rc genhtml_branch_coverage=1 00:29:37.173 --rc genhtml_function_coverage=1 
00:29:37.173 --rc genhtml_legend=1 00:29:37.173 --rc geninfo_all_blocks=1 00:29:37.173 --rc geninfo_unexecuted_blocks=1 00:29:37.173 00:29:37.173 ' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.173 --rc genhtml_branch_coverage=1 00:29:37.173 --rc genhtml_function_coverage=1 00:29:37.173 --rc genhtml_legend=1 00:29:37.173 --rc geninfo_all_blocks=1 00:29:37.173 --rc geninfo_unexecuted_blocks=1 00:29:37.173 00:29:37.173 ' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.173 --rc genhtml_branch_coverage=1 00:29:37.173 --rc genhtml_function_coverage=1 00:29:37.173 --rc genhtml_legend=1 00:29:37.173 --rc geninfo_all_blocks=1 00:29:37.173 --rc geninfo_unexecuted_blocks=1 00:29:37.173 00:29:37.173 ' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:37.173 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:37.173 --rc genhtml_branch_coverage=1 00:29:37.173 --rc genhtml_function_coverage=1 00:29:37.173 --rc genhtml_legend=1 00:29:37.173 --rc geninfo_all_blocks=1 00:29:37.173 --rc geninfo_unexecuted_blocks=1 00:29:37.173 00:29:37.173 ' 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:37.173 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh 
]] 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # : 0 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:37.174 05:29:50 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:37.174 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@440 -- # remove_spdk_ns 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # xtrace_disable 00:29:37.174 05:29:50 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # pci_devs=() 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # net_devs=() 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # e810=() 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@320 -- # local -ga e810 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # x722=() 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # local -ga x722 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # mlx=() 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@322 -- # local -ga mlx 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:43.744 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:43.744 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:43.744 05:29:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:43.744 Found net devices under 0000:af:00.0: cvl_0_0 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # for pci in 
"${pci_devs[@]}" 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:43.744 Found net devices under 0000:af:00.1: cvl_0_1 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@442 -- # is_hw=yes 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:43.744 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:43.744 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:43.744 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.364 ms 00:29:43.745 00:29:43.745 --- 10.0.0.2 ping statistics --- 00:29:43.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.745 rtt min/avg/max/mdev = 0.364/0.364/0.364/0.000 ms 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:43.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:43.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:29:43.745 00:29:43.745 --- 10.0.0.1 ping statistics --- 00:29:43.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:43.745 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # return 0 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@509 -- # nvmfpid=438310 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@510 -- # waitforlisten 438310 00:29:43.745 05:29:56 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438310 ']' 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:43.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 [2024-12-15 05:29:56.755786] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:43.745 [2024-12-15 05:29:56.755836] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:43.745 [2024-12-15 05:29:56.831520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:43.745 [2024-12-15 05:29:56.854242] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:43.745 [2024-12-15 05:29:56.854281] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:43.745 [2024-12-15 05:29:56.854289] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:43.745 [2024-12-15 05:29:56.854296] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:43.745 [2024-12-15 05:29:56.854301] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:43.745 [2024-12-15 05:29:56.855662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:43.745 [2024-12-15 05:29:56.855767] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.745 [2024-12-15 05:29:56.855769] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 [2024-12-15 05:29:56.991139] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:56 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 Malloc0 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 [2024-12-15 
05:29:57.050313] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 [2024-12-15 05:29:57.062258] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 Malloc1 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=438388 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT 
SIGTERM EXIT 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 438388 /var/tmp/bdevperf.sock 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 438388 ']' 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:43.745 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:43.745 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.746 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:29:43.746 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:43.746 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:43.746 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.005 NVMe0n1 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.005 1 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:29:44.005 05:29:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.005 request: 00:29:44.005 { 00:29:44.005 "name": "NVMe0", 00:29:44.005 "trtype": "tcp", 00:29:44.005 "traddr": "10.0.0.2", 00:29:44.005 "adrfam": "ipv4", 00:29:44.005 "trsvcid": "4420", 00:29:44.005 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.005 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:29:44.005 "hostaddr": "10.0.0.1", 00:29:44.005 "prchk_reftag": false, 00:29:44.005 "prchk_guard": false, 00:29:44.005 "hdgst": false, 00:29:44.005 "ddgst": false, 00:29:44.005 "allow_unrecognized_csi": false, 00:29:44.005 "method": "bdev_nvme_attach_controller", 00:29:44.005 "req_id": 1 00:29:44.005 } 00:29:44.005 Got JSON-RPC error response 00:29:44.005 response: 00:29:44.005 { 00:29:44.005 "code": -114, 00:29:44.005 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:44.005 } 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:44.005 05:29:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.005 request: 00:29:44.005 { 00:29:44.005 "name": "NVMe0", 00:29:44.005 "trtype": "tcp", 00:29:44.005 "traddr": "10.0.0.2", 00:29:44.005 "adrfam": "ipv4", 00:29:44.005 "trsvcid": "4420", 00:29:44.005 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:29:44.005 "hostaddr": "10.0.0.1", 00:29:44.005 "prchk_reftag": false, 00:29:44.005 "prchk_guard": false, 00:29:44.005 "hdgst": false, 00:29:44.005 "ddgst": false, 00:29:44.005 "allow_unrecognized_csi": false, 00:29:44.005 "method": "bdev_nvme_attach_controller", 00:29:44.005 "req_id": 1 00:29:44.005 } 00:29:44.005 Got JSON-RPC error response 00:29:44.005 response: 00:29:44.005 { 00:29:44.005 "code": -114, 00:29:44.005 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:44.005 } 00:29:44.005 05:29:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.005 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.006 request: 00:29:44.006 { 00:29:44.006 "name": "NVMe0", 00:29:44.006 "trtype": "tcp", 00:29:44.006 "traddr": "10.0.0.2", 00:29:44.006 "adrfam": "ipv4", 00:29:44.006 "trsvcid": "4420", 00:29:44.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.006 "hostaddr": "10.0.0.1", 00:29:44.006 "prchk_reftag": false, 00:29:44.006 "prchk_guard": false, 00:29:44.006 "hdgst": false, 00:29:44.006 "ddgst": false, 00:29:44.006 "multipath": "disable", 00:29:44.006 "allow_unrecognized_csi": false, 00:29:44.006 "method": "bdev_nvme_attach_controller", 00:29:44.006 "req_id": 1 00:29:44.006 } 00:29:44.006 Got JSON-RPC error response 00:29:44.006 response: 00:29:44.006 { 00:29:44.006 "code": -114, 00:29:44.006 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:29:44.006 } 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # 
valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.006 request: 00:29:44.006 { 00:29:44.006 "name": "NVMe0", 00:29:44.006 "trtype": "tcp", 00:29:44.006 "traddr": "10.0.0.2", 00:29:44.006 "adrfam": "ipv4", 00:29:44.006 "trsvcid": "4420", 00:29:44.006 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:44.006 "hostaddr": "10.0.0.1", 00:29:44.006 "prchk_reftag": false, 00:29:44.006 "prchk_guard": false, 00:29:44.006 "hdgst": false, 00:29:44.006 "ddgst": false, 00:29:44.006 "multipath": "failover", 00:29:44.006 "allow_unrecognized_csi": false, 00:29:44.006 "method": "bdev_nvme_attach_controller", 00:29:44.006 "req_id": 1 00:29:44.006 } 00:29:44.006 Got JSON-RPC error response 00:29:44.006 response: 00:29:44.006 { 00:29:44.006 "code": -114, 00:29:44.006 "message": "A controller named NVMe0 already exists with the specified network path" 00:29:44.006 } 00:29:44.006 05:29:57 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.006 NVMe0n1 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.006 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.265 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:29:44.265 05:29:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:45.642 { 00:29:45.642 "results": [ 00:29:45.642 { 00:29:45.642 "job": "NVMe0n1", 00:29:45.642 "core_mask": "0x1", 00:29:45.642 "workload": "write", 00:29:45.642 "status": "finished", 00:29:45.642 "queue_depth": 128, 00:29:45.642 "io_size": 4096, 00:29:45.642 "runtime": 1.003814, 00:29:45.642 "iops": 25377.211316040622, 00:29:45.642 "mibps": 99.12973170328368, 00:29:45.642 "io_failed": 0, 00:29:45.642 "io_timeout": 0, 00:29:45.642 "avg_latency_us": 5036.963188909701, 00:29:45.642 "min_latency_us": 2995.9314285714286, 00:29:45.642 "max_latency_us": 8800.548571428571 00:29:45.642 } 00:29:45.642 ], 00:29:45.642 "core_count": 1 00:29:45.642 } 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s 
/var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # [[ -n '' ]] 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # killprocess 438388 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 438388 ']' 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438388 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.642 05:29:58 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438388 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438388' 00:29:45.642 killing process with pid 438388 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438388 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438388 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # rpc_cmd nvmf_delete_subsystem 
nqn.2016-06.io.spdk:cnode1 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.642 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@123 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:29:45.643 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:45.643 [2024-12-15 05:29:57.167168] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:29:45.643 [2024-12-15 05:29:57.167213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid438388 ] 00:29:45.643 [2024-12-15 05:29:57.242520] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.643 [2024-12-15 05:29:57.264846] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.643 [2024-12-15 05:29:57.836939] bdev.c:4957:bdev_name_add: *ERROR*: Bdev name 9763f871-5873-4418-b522-949e0e58659b already exists 00:29:45.643 [2024-12-15 05:29:57.836963] bdev.c:8177:bdev_register: *ERROR*: Unable to add uuid:9763f871-5873-4418-b522-949e0e58659b alias for bdev NVMe1n1 00:29:45.643 [2024-12-15 05:29:57.836971] bdev_nvme.c:4666:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:29:45.643 Running I/O for 1 seconds... 00:29:45.643 25346.00 IOPS, 99.01 MiB/s 00:29:45.643 Latency(us) 00:29:45.643 [2024-12-15T04:29:59.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.643 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:29:45.643 NVMe0n1 : 1.00 25377.21 99.13 0.00 0.00 5036.96 2995.93 8800.55 00:29:45.643 [2024-12-15T04:29:59.330Z] =================================================================================================================== 00:29:45.643 [2024-12-15T04:29:59.330Z] Total : 25377.21 99.13 0.00 0.00 5036.96 2995.93 8800.55 00:29:45.643 Received shutdown signal, test time was about 1.000000 seconds 00:29:45.643 00:29:45.643 Latency(us) 00:29:45.643 [2024-12-15T04:29:59.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:45.643 [2024-12-15T04:29:59.330Z] =================================================================================================================== 00:29:45.643 [2024-12-15T04:29:59.330Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:29:45.643 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@124 -- # nvmftestfini 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # sync 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set +e 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:45.643 rmmod nvme_tcp 00:29:45.643 rmmod nvme_fabrics 00:29:45.643 rmmod nvme_keyring 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@128 -- # set -e 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # return 0 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@517 -- # '[' -n 438310 ']' 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@518 -- # killprocess 438310 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 438310 ']' 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 438310 
00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:45.643 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 438310 00:29:45.902 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:45.902 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:45.902 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 438310' 00:29:45.902 killing process with pid 438310 00:29:45.902 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 438310 00:29:45.902 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 438310 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # iptr 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-save 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@791 -- # iptables-restore 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # remove_spdk_ns 
00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:45.903 05:29:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.438 05:30:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:48.438 00:29:48.438 real 0m11.003s 00:29:48.438 user 0m11.997s 00:29:48.438 sys 0m5.141s 00:29:48.438 05:30:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:48.438 05:30:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:29:48.438 ************************************ 00:29:48.438 END TEST nvmf_multicontroller 00:29:48.438 ************************************ 00:29:48.438 05:30:01 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:48.438 05:30:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:48.438 05:30:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:48.438 05:30:01 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:48.438 ************************************ 00:29:48.438 START TEST nvmf_aer 00:29:48.438 ************************************ 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:29:48.439 * Looking for test storage... 
00:29:48.439 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lcov --version 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:48.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.439 --rc genhtml_branch_coverage=1 00:29:48.439 --rc genhtml_function_coverage=1 00:29:48.439 --rc genhtml_legend=1 00:29:48.439 --rc geninfo_all_blocks=1 00:29:48.439 --rc geninfo_unexecuted_blocks=1 00:29:48.439 00:29:48.439 ' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:48.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.439 --rc 
genhtml_branch_coverage=1 00:29:48.439 --rc genhtml_function_coverage=1 00:29:48.439 --rc genhtml_legend=1 00:29:48.439 --rc geninfo_all_blocks=1 00:29:48.439 --rc geninfo_unexecuted_blocks=1 00:29:48.439 00:29:48.439 ' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:48.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.439 --rc genhtml_branch_coverage=1 00:29:48.439 --rc genhtml_function_coverage=1 00:29:48.439 --rc genhtml_legend=1 00:29:48.439 --rc geninfo_all_blocks=1 00:29:48.439 --rc geninfo_unexecuted_blocks=1 00:29:48.439 00:29:48.439 ' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:48.439 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:48.439 --rc genhtml_branch_coverage=1 00:29:48.439 --rc genhtml_function_coverage=1 00:29:48.439 --rc genhtml_legend=1 00:29:48.439 --rc geninfo_all_blocks=1 00:29:48.439 --rc geninfo_unexecuted_blocks=1 00:29:48.439 00:29:48.439 ' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:48.439 05:30:01 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # : 0 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:48.439 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:48.439 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:48.440 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # xtrace_disable 00:29:48.440 05:30:01 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # pci_devs=() 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # local -a pci_devs 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # pci_net_devs=() 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # pci_drivers=() 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # local -A pci_drivers 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # net_devs=() 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@319 -- # local -ga net_devs 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@320 -- # e810=() 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # local -ga e810 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # x722=() 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # local -ga x722 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # mlx=() 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@322 -- # local -ga mlx 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- 
# pci_devs+=("${e810[@]}") 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:29:55.008 Found 0000:af:00.0 (0x8086 - 0x159b) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:29:55.008 Found 0000:af:00.1 (0x8086 - 0x159b) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:55.008 05:30:07 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.008 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:29:55.009 Found net devices under 0000:af:00.0: cvl_0_0 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # [[ up == up ]] 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # 
(( 1 == 0 )) 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:29:55.009 Found net devices under 0000:af:00.1: cvl_0_1 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@442 -- # is_hw=yes 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:29:55.009 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:55.009 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.365 ms 00:29:55.009 00:29:55.009 --- 10.0.0.2 ping statistics --- 00:29:55.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.009 rtt min/avg/max/mdev = 0.365/0.365/0.365/0.000 ms 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:55.009 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:55.009 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:29:55.009 00:29:55.009 --- 10.0.0.1 ping statistics --- 00:29:55.009 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:55.009 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # return 0 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@10 -- # set +x 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@509 -- # nvmfpid=442754 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@510 -- # waitforlisten 442754 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 442754 ']' 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:55.009 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:55.009 05:30:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.009 [2024-12-15 05:30:07.868200] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:29:55.009 [2024-12-15 05:30:07.868249] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:55.009 [2024-12-15 05:30:07.947715] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:55.009 [2024-12-15 05:30:07.972024] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:55.009 [2024-12-15 05:30:07.972059] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:55.009 [2024-12-15 05:30:07.972066] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:55.009 [2024-12-15 05:30:07.972072] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:55.009 [2024-12-15 05:30:07.972077] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:55.009 [2024-12-15 05:30:07.973408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:55.009 [2024-12-15 05:30:07.973515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:29:55.009 [2024-12-15 05:30:07.973540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:55.009 [2024-12-15 05:30:07.973540] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.009 [2024-12-15 05:30:08.101273] 
tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.009 Malloc0 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.009 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 [2024-12-15 05:30:08.163436] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 [ 00:29:55.010 { 00:29:55.010 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:55.010 "subtype": "Discovery", 00:29:55.010 "listen_addresses": [], 00:29:55.010 "allow_any_host": true, 00:29:55.010 "hosts": [] 00:29:55.010 }, 00:29:55.010 { 00:29:55.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:55.010 "subtype": "NVMe", 00:29:55.010 "listen_addresses": [ 00:29:55.010 { 00:29:55.010 "trtype": "TCP", 00:29:55.010 "adrfam": "IPv4", 00:29:55.010 "traddr": "10.0.0.2", 00:29:55.010 "trsvcid": "4420" 00:29:55.010 } 00:29:55.010 ], 00:29:55.010 "allow_any_host": true, 00:29:55.010 "hosts": [], 00:29:55.010 "serial_number": "SPDK00000000000001", 00:29:55.010 "model_number": "SPDK bdev Controller", 00:29:55.010 "max_namespaces": 2, 00:29:55.010 "min_cntlid": 1, 00:29:55.010 "max_cntlid": 65519, 00:29:55.010 "namespaces": [ 00:29:55.010 { 00:29:55.010 "nsid": 1, 00:29:55.010 "bdev_name": "Malloc0", 00:29:55.010 "name": "Malloc0", 00:29:55.010 "nguid": "A45DAC3791F04971AC28674AECBD4304", 00:29:55.010 "uuid": "a45dac37-91f0-4971-ac28-674aecbd4304" 00:29:55.010 } 00:29:55.010 ] 00:29:55.010 } 00:29:55.010 ] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=442799 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 Malloc1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 Asynchronous Event Request test 00:29:55.010 Attaching to 10.0.0.2 00:29:55.010 Attached to 10.0.0.2 00:29:55.010 Registering asynchronous event callbacks... 00:29:55.010 Starting namespace attribute notice tests for all controllers... 00:29:55.010 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:29:55.010 aer_cb - Changed Namespace 00:29:55.010 Cleaning up... 
00:29:55.010 [ 00:29:55.010 { 00:29:55.010 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:29:55.010 "subtype": "Discovery", 00:29:55.010 "listen_addresses": [], 00:29:55.010 "allow_any_host": true, 00:29:55.010 "hosts": [] 00:29:55.010 }, 00:29:55.010 { 00:29:55.010 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:29:55.010 "subtype": "NVMe", 00:29:55.010 "listen_addresses": [ 00:29:55.010 { 00:29:55.010 "trtype": "TCP", 00:29:55.010 "adrfam": "IPv4", 00:29:55.010 "traddr": "10.0.0.2", 00:29:55.010 "trsvcid": "4420" 00:29:55.010 } 00:29:55.010 ], 00:29:55.010 "allow_any_host": true, 00:29:55.010 "hosts": [], 00:29:55.010 "serial_number": "SPDK00000000000001", 00:29:55.010 "model_number": "SPDK bdev Controller", 00:29:55.010 "max_namespaces": 2, 00:29:55.010 "min_cntlid": 1, 00:29:55.010 "max_cntlid": 65519, 00:29:55.010 "namespaces": [ 00:29:55.010 { 00:29:55.010 "nsid": 1, 00:29:55.010 "bdev_name": "Malloc0", 00:29:55.010 "name": "Malloc0", 00:29:55.010 "nguid": "A45DAC3791F04971AC28674AECBD4304", 00:29:55.010 "uuid": "a45dac37-91f0-4971-ac28-674aecbd4304" 00:29:55.010 }, 00:29:55.010 { 00:29:55.010 "nsid": 2, 00:29:55.010 "bdev_name": "Malloc1", 00:29:55.010 "name": "Malloc1", 00:29:55.010 "nguid": "D4F91EA3FA344F408F7AE2BEE3DCB9DD", 00:29:55.010 "uuid": "d4f91ea3-fa34-4f40-8f7a-e2bee3dcb9dd" 00:29:55.010 } 00:29:55.010 ] 00:29:55.010 } 00:29:55.010 ] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 442799 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@516 -- # nvmfcleanup 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # sync 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set +e 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # for i in {1..20} 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:29:55.010 rmmod nvme_tcp 00:29:55.010 rmmod nvme_fabrics 00:29:55.010 rmmod nvme_keyring 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@128 -- # set -e 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # return 0 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@517 -- # '[' -n 
442754 ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@518 -- # killprocess 442754 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 442754 ']' 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 442754 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:29:55.010 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 442754 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 442754' 00:29:55.269 killing process with pid 442754 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 442754 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 442754 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # iptr 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-save 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@791 -- # iptables-restore 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # remove_spdk_ns 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:55.269 05:30:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.803 05:30:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:29:57.803 00:29:57.804 real 0m9.276s 00:29:57.804 user 0m5.399s 00:29:57.804 sys 0m4.893s 00:29:57.804 05:30:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:57.804 05:30:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:29:57.804 ************************************ 00:29:57.804 END TEST nvmf_aer 00:29:57.804 ************************************ 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:29:57.804 ************************************ 00:29:57.804 START TEST nvmf_async_init 00:29:57.804 ************************************ 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:29:57.804 * Looking for test storage... 
00:29:57.804 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lcov --version 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:57.804 05:30:11 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:57.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.804 --rc genhtml_branch_coverage=1 00:29:57.804 --rc genhtml_function_coverage=1 00:29:57.804 --rc genhtml_legend=1 00:29:57.804 --rc geninfo_all_blocks=1 00:29:57.804 --rc geninfo_unexecuted_blocks=1 00:29:57.804 
00:29:57.804 ' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:57.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.804 --rc genhtml_branch_coverage=1 00:29:57.804 --rc genhtml_function_coverage=1 00:29:57.804 --rc genhtml_legend=1 00:29:57.804 --rc geninfo_all_blocks=1 00:29:57.804 --rc geninfo_unexecuted_blocks=1 00:29:57.804 00:29:57.804 ' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:57.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.804 --rc genhtml_branch_coverage=1 00:29:57.804 --rc genhtml_function_coverage=1 00:29:57.804 --rc genhtml_legend=1 00:29:57.804 --rc geninfo_all_blocks=1 00:29:57.804 --rc geninfo_unexecuted_blocks=1 00:29:57.804 00:29:57.804 ' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:57.804 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:57.804 --rc genhtml_branch_coverage=1 00:29:57.804 --rc genhtml_function_coverage=1 00:29:57.804 --rc genhtml_legend=1 00:29:57.804 --rc geninfo_all_blocks=1 00:29:57.804 --rc geninfo_unexecuted_blocks=1 00:29:57.804 00:29:57.804 ' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # : 0 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:29:57.804 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:29:57.805 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@55 -- # have_pci_nics=0 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=8cbdf24a8a5d4d3b866e3166b4711dc5 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@476 -- # prepare_net_devs 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@438 -- # local -g is_hw=no 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@440 -- # remove_spdk_ns 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # xtrace_disable 00:29:57.805 05:30:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # pci_devs=() 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # net_devs=() 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # e810=() 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # local -ga e810 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # x722=() 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- 
# local -ga x722 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # mlx=() 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@322 -- # local -ga mlx 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:04.373 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:04.374 05:30:16 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:04.374 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:04.374 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:04.374 Found net devices under 0000:af:00.0: cvl_0_0 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # [[ 
up == up ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:04.374 Found net devices under 0000:af:00.1: cvl_0_1 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@442 -- # is_hw=yes 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:04.374 05:30:16 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:04.374 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:04.374 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.373 ms 00:30:04.374 00:30:04.374 --- 10.0.0.2 ping statistics --- 00:30:04.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.374 rtt min/avg/max/mdev = 0.373/0.373/0.373/0.000 ms 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:04.374 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:04.374 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.118 ms 00:30:04.374 00:30:04.374 --- 10.0.0.1 ping statistics --- 00:30:04.374 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:04.374 rtt min/avg/max/mdev = 0.118/0.118/0.118/0.000 ms 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # return 0 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
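The `nvmf_tcp_init` trace above moves one port of the NIC (`cvl_0_0`) into a dedicated network namespace and leaves the other (`cvl_0_1`) in the root namespace, so target and initiator can exchange real TCP traffic on a single host. A minimal standalone sketch of that sequence follows; the interface, namespace, and address values are taken from this run, while the `RUN` dry-run wrapper is an addition for safe illustration (the real commands require root):

```shell
# Dry-run by default: RUN=echo prints each command; set RUN=eval (as root) to apply.
RUN="${RUN:-echo}"

setup_target_ns() {
  local ns=$1 tgt_if=$2 ini_if=$3
  $RUN ip netns add "$ns"                                        # isolated namespace for the target port
  $RUN ip link set "$tgt_if" netns "$ns"                         # move the target-side interface into it
  $RUN ip addr add 10.0.0.1/24 dev "$ini_if"                     # initiator side stays in the root namespace
  $RUN ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if" # target address inside the namespace
  $RUN ip link set "$ini_if" up
  $RUN ip netns exec "$ns" ip link set "$tgt_if" up
  $RUN ip netns exec "$ns" ip link set lo up                     # loopback needed inside the namespace
}

setup_target_ns cvl_0_0_ns_spdk cvl_0_0 cvl_0_1
```

After this, the log's `ping -c 1 10.0.0.2` / `ip netns exec ... ping -c 1 10.0.0.1` pair verifies connectivity in both directions before the target application is launched under `ip netns exec`.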
common/autotest_common.sh@726 -- # xtrace_disable 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@509 -- # nvmfpid=446357 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@510 -- # waitforlisten 446357 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 446357 ']' 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:04.374 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:04.374 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.374 [2024-12-15 05:30:17.248126] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:30:04.374 [2024-12-15 05:30:17.248170] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:04.374 [2024-12-15 05:30:17.325866] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:04.374 [2024-12-15 05:30:17.347155] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:04.374 [2024-12-15 05:30:17.347191] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:04.374 [2024-12-15 05:30:17.347198] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:04.374 [2024-12-15 05:30:17.347204] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:04.375 [2024-12-15 05:30:17.347209] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:04.375 [2024-12-15 05:30:17.347701] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 [2024-12-15 05:30:17.477964] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 null0 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 8cbdf24a8a5d4d3b866e3166b4711dc5 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 [2024-12-15 05:30:17.530216] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 nvme0n1 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 [ 00:30:04.375 { 00:30:04.375 "name": "nvme0n1", 00:30:04.375 "aliases": [ 00:30:04.375 "8cbdf24a-8a5d-4d3b-866e-3166b4711dc5" 00:30:04.375 ], 00:30:04.375 "product_name": "NVMe disk", 00:30:04.375 "block_size": 512, 00:30:04.375 "num_blocks": 2097152, 00:30:04.375 "uuid": "8cbdf24a-8a5d-4d3b-866e-3166b4711dc5", 00:30:04.375 "numa_id": 1, 00:30:04.375 "assigned_rate_limits": { 00:30:04.375 "rw_ios_per_sec": 0, 00:30:04.375 "rw_mbytes_per_sec": 0, 00:30:04.375 "r_mbytes_per_sec": 0, 00:30:04.375 "w_mbytes_per_sec": 0 00:30:04.375 }, 00:30:04.375 "claimed": false, 00:30:04.375 "zoned": false, 00:30:04.375 "supported_io_types": { 00:30:04.375 "read": true, 00:30:04.375 "write": true, 00:30:04.375 "unmap": false, 00:30:04.375 "flush": true, 00:30:04.375 "reset": true, 00:30:04.375 "nvme_admin": true, 00:30:04.375 "nvme_io": true, 00:30:04.375 "nvme_io_md": false, 00:30:04.375 "write_zeroes": true, 00:30:04.375 "zcopy": false, 00:30:04.375 "get_zone_info": false, 00:30:04.375 "zone_management": false, 00:30:04.375 "zone_append": false, 00:30:04.375 "compare": true, 00:30:04.375 "compare_and_write": true, 00:30:04.375 "abort": true, 00:30:04.375 "seek_hole": false, 00:30:04.375 "seek_data": false, 00:30:04.375 "copy": true, 00:30:04.375 
"nvme_iov_md": false 00:30:04.375 }, 00:30:04.375 "memory_domains": [ 00:30:04.375 { 00:30:04.375 "dma_device_id": "system", 00:30:04.375 "dma_device_type": 1 00:30:04.375 } 00:30:04.375 ], 00:30:04.375 "driver_specific": { 00:30:04.375 "nvme": [ 00:30:04.375 { 00:30:04.375 "trid": { 00:30:04.375 "trtype": "TCP", 00:30:04.375 "adrfam": "IPv4", 00:30:04.375 "traddr": "10.0.0.2", 00:30:04.375 "trsvcid": "4420", 00:30:04.375 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:04.375 }, 00:30:04.375 "ctrlr_data": { 00:30:04.375 "cntlid": 1, 00:30:04.375 "vendor_id": "0x8086", 00:30:04.375 "model_number": "SPDK bdev Controller", 00:30:04.375 "serial_number": "00000000000000000000", 00:30:04.375 "firmware_revision": "25.01", 00:30:04.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.375 "oacs": { 00:30:04.375 "security": 0, 00:30:04.375 "format": 0, 00:30:04.375 "firmware": 0, 00:30:04.375 "ns_manage": 0 00:30:04.375 }, 00:30:04.375 "multi_ctrlr": true, 00:30:04.375 "ana_reporting": false 00:30:04.375 }, 00:30:04.375 "vs": { 00:30:04.375 "nvme_version": "1.3" 00:30:04.375 }, 00:30:04.375 "ns_data": { 00:30:04.375 "id": 1, 00:30:04.375 "can_share": true 00:30:04.375 } 00:30:04.375 } 00:30:04.375 ], 00:30:04.375 "mp_policy": "active_passive" 00:30:04.375 } 00:30:04.375 } 00:30:04.375 ] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 [2024-12-15 05:30:17.798846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:30:04.375 [2024-12-15 05:30:17.798900] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed 
to flush tqpair=0xeb1230 (9): Bad file descriptor 00:30:04.375 [2024-12-15 05:30:17.931066] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.375 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.375 [ 00:30:04.375 { 00:30:04.375 "name": "nvme0n1", 00:30:04.375 "aliases": [ 00:30:04.375 "8cbdf24a-8a5d-4d3b-866e-3166b4711dc5" 00:30:04.375 ], 00:30:04.375 "product_name": "NVMe disk", 00:30:04.375 "block_size": 512, 00:30:04.375 "num_blocks": 2097152, 00:30:04.375 "uuid": "8cbdf24a-8a5d-4d3b-866e-3166b4711dc5", 00:30:04.375 "numa_id": 1, 00:30:04.375 "assigned_rate_limits": { 00:30:04.375 "rw_ios_per_sec": 0, 00:30:04.375 "rw_mbytes_per_sec": 0, 00:30:04.375 "r_mbytes_per_sec": 0, 00:30:04.375 "w_mbytes_per_sec": 0 00:30:04.375 }, 00:30:04.375 "claimed": false, 00:30:04.375 "zoned": false, 00:30:04.375 "supported_io_types": { 00:30:04.375 "read": true, 00:30:04.375 "write": true, 00:30:04.375 "unmap": false, 00:30:04.375 "flush": true, 00:30:04.375 "reset": true, 00:30:04.375 "nvme_admin": true, 00:30:04.375 "nvme_io": true, 00:30:04.375 "nvme_io_md": false, 00:30:04.375 "write_zeroes": true, 00:30:04.375 "zcopy": false, 00:30:04.375 "get_zone_info": false, 00:30:04.375 "zone_management": false, 00:30:04.375 "zone_append": false, 00:30:04.375 "compare": true, 00:30:04.375 "compare_and_write": true, 00:30:04.375 "abort": true, 00:30:04.375 "seek_hole": false, 00:30:04.375 "seek_data": false, 00:30:04.375 "copy": true, 00:30:04.375 "nvme_iov_md": false 00:30:04.375 }, 00:30:04.375 "memory_domains": [ 
00:30:04.375 { 00:30:04.375 "dma_device_id": "system", 00:30:04.375 "dma_device_type": 1 00:30:04.375 } 00:30:04.375 ], 00:30:04.375 "driver_specific": { 00:30:04.375 "nvme": [ 00:30:04.375 { 00:30:04.375 "trid": { 00:30:04.375 "trtype": "TCP", 00:30:04.375 "adrfam": "IPv4", 00:30:04.375 "traddr": "10.0.0.2", 00:30:04.375 "trsvcid": "4420", 00:30:04.375 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:04.375 }, 00:30:04.375 "ctrlr_data": { 00:30:04.375 "cntlid": 2, 00:30:04.375 "vendor_id": "0x8086", 00:30:04.375 "model_number": "SPDK bdev Controller", 00:30:04.375 "serial_number": "00000000000000000000", 00:30:04.375 "firmware_revision": "25.01", 00:30:04.375 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.375 "oacs": { 00:30:04.375 "security": 0, 00:30:04.375 "format": 0, 00:30:04.375 "firmware": 0, 00:30:04.375 "ns_manage": 0 00:30:04.375 }, 00:30:04.375 "multi_ctrlr": true, 00:30:04.375 "ana_reporting": false 00:30:04.375 }, 00:30:04.375 "vs": { 00:30:04.375 "nvme_version": "1.3" 00:30:04.375 }, 00:30:04.375 "ns_data": { 00:30:04.376 "id": 1, 00:30:04.376 "can_share": true 00:30:04.376 } 00:30:04.376 } 00:30:04.376 ], 00:30:04.376 "mp_policy": "active_passive" 00:30:04.376 } 00:30:04.376 } 00:30:04.376 ] 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.sjLtJnLP2j 
00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.sjLtJnLP2j 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.sjLtJnLP2j 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.376 05:30:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.376 [2024-12-15 05:30:18.003453] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:04.376 [2024-12-15 05:30:18.003541] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
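The `host/async_init.sh` steps traced above provision a TLS pre-shared key for the secure-channel listener: a temp file receives the interchange-format PSK, is restricted to mode 0600, and is then registered with the target via `keyring_file_add_key`. A minimal sketch of the file-side steps (the key string is copied from this log; the `rpc.py` call is commented out because it assumes a running `nvmf_tgt` and an SPDK checkout path):

```shell
# Create the PSK file; SPDK expects owner-only permissions on key files.
key_path=$(mktemp)
echo -n 'NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:' > "$key_path"
chmod 0600 "$key_path"

# Register the key with a running target, then bind it to a host on the subsystem
# (requires nvmf_tgt; paths are assumptions based on this test's layout):
# ./scripts/rpc.py keyring_file_add_key key0 "$key_path"
# ./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 \
#     nqn.2016-06.io.spdk:host1 --psk key0
```

The subsequent `bdev_nvme_attach_controller ... --psk key0` in the log then connects over port 4421, where the listener was added with `--secure-channel`.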
00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.376 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.376 [2024-12-15 05:30:18.019509] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:04.635 nvme0n1 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.635 [ 00:30:04.635 { 00:30:04.635 "name": "nvme0n1", 00:30:04.635 "aliases": [ 00:30:04.635 "8cbdf24a-8a5d-4d3b-866e-3166b4711dc5" 00:30:04.635 ], 00:30:04.635 "product_name": "NVMe disk", 00:30:04.635 "block_size": 512, 00:30:04.635 "num_blocks": 2097152, 00:30:04.635 "uuid": "8cbdf24a-8a5d-4d3b-866e-3166b4711dc5", 00:30:04.635 "numa_id": 1, 00:30:04.635 "assigned_rate_limits": { 00:30:04.635 "rw_ios_per_sec": 0, 00:30:04.635 
"rw_mbytes_per_sec": 0, 00:30:04.635 "r_mbytes_per_sec": 0, 00:30:04.635 "w_mbytes_per_sec": 0 00:30:04.635 }, 00:30:04.635 "claimed": false, 00:30:04.635 "zoned": false, 00:30:04.635 "supported_io_types": { 00:30:04.635 "read": true, 00:30:04.635 "write": true, 00:30:04.635 "unmap": false, 00:30:04.635 "flush": true, 00:30:04.635 "reset": true, 00:30:04.635 "nvme_admin": true, 00:30:04.635 "nvme_io": true, 00:30:04.635 "nvme_io_md": false, 00:30:04.635 "write_zeroes": true, 00:30:04.635 "zcopy": false, 00:30:04.635 "get_zone_info": false, 00:30:04.635 "zone_management": false, 00:30:04.635 "zone_append": false, 00:30:04.635 "compare": true, 00:30:04.635 "compare_and_write": true, 00:30:04.635 "abort": true, 00:30:04.635 "seek_hole": false, 00:30:04.635 "seek_data": false, 00:30:04.635 "copy": true, 00:30:04.635 "nvme_iov_md": false 00:30:04.635 }, 00:30:04.635 "memory_domains": [ 00:30:04.635 { 00:30:04.635 "dma_device_id": "system", 00:30:04.635 "dma_device_type": 1 00:30:04.635 } 00:30:04.635 ], 00:30:04.635 "driver_specific": { 00:30:04.635 "nvme": [ 00:30:04.635 { 00:30:04.635 "trid": { 00:30:04.635 "trtype": "TCP", 00:30:04.635 "adrfam": "IPv4", 00:30:04.635 "traddr": "10.0.0.2", 00:30:04.635 "trsvcid": "4421", 00:30:04.635 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:04.635 }, 00:30:04.635 "ctrlr_data": { 00:30:04.635 "cntlid": 3, 00:30:04.635 "vendor_id": "0x8086", 00:30:04.635 "model_number": "SPDK bdev Controller", 00:30:04.635 "serial_number": "00000000000000000000", 00:30:04.635 "firmware_revision": "25.01", 00:30:04.635 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:04.635 "oacs": { 00:30:04.635 "security": 0, 00:30:04.635 "format": 0, 00:30:04.635 "firmware": 0, 00:30:04.635 "ns_manage": 0 00:30:04.635 }, 00:30:04.635 "multi_ctrlr": true, 00:30:04.635 "ana_reporting": false 00:30:04.635 }, 00:30:04.635 "vs": { 00:30:04.635 "nvme_version": "1.3" 00:30:04.635 }, 00:30:04.635 "ns_data": { 00:30:04.635 "id": 1, 00:30:04.635 "can_share": true 00:30:04.635 } 
00:30:04.635 } 00:30:04.635 ], 00:30:04.635 "mp_policy": "active_passive" 00:30:04.635 } 00:30:04.635 } 00:30:04.635 ] 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.sjLtJnLP2j 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # sync 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set +e 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:04.635 rmmod nvme_tcp 00:30:04.635 rmmod nvme_fabrics 00:30:04.635 rmmod nvme_keyring 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@128 -- # set -e 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # return 0 00:30:04.635 05:30:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@517 -- # '[' -n 446357 ']' 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@518 -- # killprocess 446357 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 446357 ']' 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 446357 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 446357 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 446357' 00:30:04.635 killing process with pid 446357 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 446357 00:30:04.635 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 446357 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # iptr 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-save 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:04.894 05:30:18 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@791 -- # iptables-restore 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:04.894 05:30:18 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:06.798 05:30:20 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:06.798 00:30:06.798 real 0m9.414s 00:30:06.798 user 0m2.965s 00:30:06.798 sys 0m4.834s 00:30:06.798 05:30:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.798 05:30:20 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:30:06.798 ************************************ 00:30:06.798 END TEST nvmf_async_init 00:30:06.798 ************************************ 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.058 ************************************ 00:30:07.058 START TEST dma 00:30:07.058 ************************************ 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:30:07.058 * 
Looking for test storage... 00:30:07.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lcov --version 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@344 -- # case "$op" in 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@345 -- # : 1 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # decimal 1 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=1 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 1 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # decimal 2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@353 -- # local d=2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@355 -- # echo 2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@368 -- # return 0 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.058 --rc genhtml_branch_coverage=1 00:30:07.058 --rc genhtml_function_coverage=1 00:30:07.058 --rc genhtml_legend=1 00:30:07.058 --rc geninfo_all_blocks=1 00:30:07.058 --rc geninfo_unexecuted_blocks=1 00:30:07.058 00:30:07.058 ' 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.058 --rc genhtml_branch_coverage=1 00:30:07.058 --rc genhtml_function_coverage=1 
00:30:07.058 --rc genhtml_legend=1 00:30:07.058 --rc geninfo_all_blocks=1 00:30:07.058 --rc geninfo_unexecuted_blocks=1 00:30:07.058 00:30:07.058 ' 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.058 --rc genhtml_branch_coverage=1 00:30:07.058 --rc genhtml_function_coverage=1 00:30:07.058 --rc genhtml_legend=1 00:30:07.058 --rc geninfo_all_blocks=1 00:30:07.058 --rc geninfo_unexecuted_blocks=1 00:30:07.058 00:30:07.058 ' 00:30:07.058 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:07.058 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.058 --rc genhtml_branch_coverage=1 00:30:07.058 --rc genhtml_function_coverage=1 00:30:07.058 --rc genhtml_legend=1 00:30:07.058 --rc geninfo_all_blocks=1 00:30:07.058 --rc geninfo_unexecuted_blocks=1 00:30:07.058 00:30:07.059 ' 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:30:07.059 
05:30:20 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # : 0 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:07.059 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.059 05:30:20 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:30:07.318 00:30:07.318 real 0m0.214s 00:30:07.318 user 0m0.133s 00:30:07.318 sys 0m0.095s 00:30:07.318 05:30:20 
nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:30:07.318 ************************************ 00:30:07.318 END TEST dma 00:30:07.318 ************************************ 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:07.318 ************************************ 00:30:07.318 START TEST nvmf_identify 00:30:07.318 ************************************ 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:30:07.318 * Looking for test storage... 
00:30:07.318 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lcov --version 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:07.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.318 --rc genhtml_branch_coverage=1 00:30:07.318 --rc genhtml_function_coverage=1 00:30:07.318 --rc genhtml_legend=1 00:30:07.318 --rc geninfo_all_blocks=1 00:30:07.318 --rc geninfo_unexecuted_blocks=1 00:30:07.318 00:30:07.318 ' 00:30:07.318 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1724 -- 
# LCOV_OPTS=' 00:30:07.318 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.318 --rc genhtml_branch_coverage=1 00:30:07.319 --rc genhtml_function_coverage=1 00:30:07.319 --rc genhtml_legend=1 00:30:07.319 --rc geninfo_all_blocks=1 00:30:07.319 --rc geninfo_unexecuted_blocks=1 00:30:07.319 00:30:07.319 ' 00:30:07.319 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:07.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.319 --rc genhtml_branch_coverage=1 00:30:07.319 --rc genhtml_function_coverage=1 00:30:07.319 --rc genhtml_legend=1 00:30:07.319 --rc geninfo_all_blocks=1 00:30:07.319 --rc geninfo_unexecuted_blocks=1 00:30:07.319 00:30:07.319 ' 00:30:07.319 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:07.319 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:07.319 --rc genhtml_branch_coverage=1 00:30:07.319 --rc genhtml_function_coverage=1 00:30:07.319 --rc genhtml_legend=1 00:30:07.319 --rc geninfo_all_blocks=1 00:30:07.319 --rc geninfo_unexecuted_blocks=1 00:30:07.319 00:30:07.319 ' 00:30:07.319 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:07.319 05:30:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # 
NVMF_IP_LEAST_ADDR=8 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:07.319 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.578 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 
-- # export PATH 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # : 0 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:07.579 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- 
host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # xtrace_disable 00:30:07.579 05:30:21 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # pci_devs=() 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:14.155 05:30:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # net_devs=() 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # e810=() 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # local -ga e810 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # x722=() 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # local -ga x722 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # mlx=() 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@322 -- # local -ga mlx 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:14.155 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:14.156 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:14.156 
05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:14.156 Found 0000:af:00.1 (0x8086 - 0x159b) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:14.156 Found net devices under 0000:af:00.0: cvl_0_0 00:30:14.156 05:30:26 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:14.156 Found net devices under 0000:af:00.1: cvl_0_1 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@442 -- # is_hw=yes 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 
00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo 
up 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:14.156 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:14.156 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.384 ms 00:30:14.156 00:30:14.156 --- 10.0.0.2 ping statistics --- 00:30:14.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.156 rtt min/avg/max/mdev = 0.384/0.384/0.384/0.000 ms 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:14.156 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:14.156 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.199 ms 00:30:14.156 00:30:14.156 --- 10.0.0.1 ping statistics --- 00:30:14.156 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:14.156 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # return 0 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@493 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=450028 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 450028 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 450028 ']' 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:14.156 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:14.157 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:14.157 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:14.157 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:14.157 05:30:26 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 [2024-12-15 05:30:26.952903] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:14.157 [2024-12-15 05:30:26.952950] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:14.157 [2024-12-15 05:30:27.032802] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:14.157 [2024-12-15 05:30:27.056962] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:14.157 [2024-12-15 05:30:27.057002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:14.157 [2024-12-15 05:30:27.057010] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:14.157 [2024-12-15 05:30:27.057017] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:14.157 [2024-12-15 05:30:27.057022] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:14.157 [2024-12-15 05:30:27.058310] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:14.157 [2024-12-15 05:30:27.058421] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:14.157 [2024-12-15 05:30:27.058507] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:14.157 [2024-12-15 05:30:27.058508] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 [2024-12-15 05:30:27.151392] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 Malloc0 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.157 05:30:27 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 [2024-12-15 05:30:27.254748] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 05:30:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.157 [ 00:30:14.157 { 00:30:14.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:30:14.157 "subtype": "Discovery", 00:30:14.157 "listen_addresses": [ 00:30:14.157 { 00:30:14.157 "trtype": "TCP", 00:30:14.157 "adrfam": "IPv4", 00:30:14.157 "traddr": "10.0.0.2", 00:30:14.157 "trsvcid": "4420" 00:30:14.157 } 00:30:14.157 ], 00:30:14.157 "allow_any_host": true, 00:30:14.157 "hosts": [] 00:30:14.157 }, 00:30:14.157 { 00:30:14.157 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:30:14.157 "subtype": "NVMe", 00:30:14.157 "listen_addresses": [ 00:30:14.157 { 00:30:14.157 "trtype": "TCP", 00:30:14.157 "adrfam": "IPv4", 00:30:14.157 "traddr": "10.0.0.2", 00:30:14.157 "trsvcid": "4420" 00:30:14.157 } 00:30:14.157 ], 00:30:14.157 "allow_any_host": true, 00:30:14.157 "hosts": [], 00:30:14.157 "serial_number": "SPDK00000000000001", 00:30:14.157 "model_number": "SPDK bdev Controller", 00:30:14.157 "max_namespaces": 32, 00:30:14.157 "min_cntlid": 1, 00:30:14.157 "max_cntlid": 65519, 00:30:14.157 "namespaces": [ 00:30:14.157 { 00:30:14.157 "nsid": 1, 00:30:14.157 "bdev_name": "Malloc0", 00:30:14.157 "name": "Malloc0", 00:30:14.157 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:30:14.157 "eui64": "ABCDEF0123456789", 00:30:14.157 "uuid": "ed9ec3bb-51b5-4281-9c8f-9069e8722fca" 00:30:14.157 } 00:30:14.157 ] 00:30:14.157 } 00:30:14.157 ] 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.157 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:30:14.157 [2024-12-15 05:30:27.305035] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:30:14.157 [2024-12-15 05:30:27.305073] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450166 ] 00:30:14.157 [2024-12-15 05:30:27.341658] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:30:14.157 [2024-12-15 05:30:27.341698] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:14.157 [2024-12-15 05:30:27.341703] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:14.157 [2024-12-15 05:30:27.341712] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:14.157 [2024-12-15 05:30:27.341719] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:14.157 [2024-12-15 05:30:27.349211] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:30:14.157 [2024-12-15 05:30:27.349244] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x22aede0 0 00:30:14.157 [2024-12-15 05:30:27.349339] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:14.157 [2024-12-15 05:30:27.349348] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:14.157 [2024-12-15 05:30:27.349351] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:14.157 [2024-12-15 05:30:27.349355] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:14.157 [2024-12-15 05:30:27.349379] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.157 [2024-12-15 05:30:27.349384] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.157 [2024-12-15 05:30:27.349387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.157 [2024-12-15 05:30:27.349398] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:14.157 [2024-12-15 05:30:27.349410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.157 [2024-12-15 05:30:27.357004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.157 [2024-12-15 05:30:27.357013] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.157 [2024-12-15 05:30:27.357016] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.157 [2024-12-15 05:30:27.357020] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.157 [2024-12-15 05:30:27.357030] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:14.157 [2024-12-15 05:30:27.357036] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:30:14.157 [2024-12-15 05:30:27.357041] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:30:14.157 [2024-12-15 05:30:27.357052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.157 [2024-12-15 05:30:27.357055] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.157 [2024-12-15 05:30:27.357058] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 
00:30:14.157 [2024-12-15 05:30:27.357065] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.158 [2024-12-15 05:30:27.357078] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 05:30:27.357157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.357163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.158 [2024-12-15 05:30:27.357166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.357175] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:30:14.158 [2024-12-15 05:30:27.357181] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:30:14.158 [2024-12-15 05:30:27.357187] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357190] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357193] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.158 [2024-12-15 05:30:27.357199] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.158 [2024-12-15 05:30:27.357209] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 05:30:27.357267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.357273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:30:14.158 [2024-12-15 05:30:27.357275] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357279] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.357284] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:30:14.158 [2024-12-15 05:30:27.357291] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:14.158 [2024-12-15 05:30:27.357296] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357300] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357303] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.158 [2024-12-15 05:30:27.357308] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.158 [2024-12-15 05:30:27.357317] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 05:30:27.357379] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.357384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.158 [2024-12-15 05:30:27.357387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357390] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.357394] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:14.158 [2024-12-15 05:30:27.357403] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357406] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357409] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.158 [2024-12-15 05:30:27.357415] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.158 [2024-12-15 05:30:27.357424] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 05:30:27.357489] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.357495] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.158 [2024-12-15 05:30:27.357499] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357503] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.357507] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:14.158 [2024-12-15 05:30:27.357511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:14.158 [2024-12-15 05:30:27.357518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:14.158 [2024-12-15 05:30:27.357625] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:30:14.158 [2024-12-15 05:30:27.357629] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:30:14.158 [2024-12-15 05:30:27.357636] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357640] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357643] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.158 [2024-12-15 05:30:27.357648] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.158 [2024-12-15 05:30:27.357658] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 05:30:27.357719] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.357725] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.158 [2024-12-15 05:30:27.357728] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357731] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.357735] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:14.158 [2024-12-15 05:30:27.357743] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357746] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357750] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.158 [2024-12-15 05:30:27.357755] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.158 [2024-12-15 05:30:27.357764] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 
05:30:27.357830] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.357835] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.158 [2024-12-15 05:30:27.357838] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357841] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.357845] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:14.158 [2024-12-15 05:30:27.357850] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:14.158 [2024-12-15 05:30:27.357856] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:30:14.158 [2024-12-15 05:30:27.357863] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:14.158 [2024-12-15 05:30:27.357871] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357875] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.158 [2024-12-15 05:30:27.357881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.158 [2024-12-15 05:30:27.357891] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 05:30:27.357978] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.158 [2024-12-15 05:30:27.357984] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:30:14.158 [2024-12-15 05:30:27.357987] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.357991] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aede0): datao=0, datal=4096, cccid=0 00:30:14.158 [2024-12-15 05:30:27.358000] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2309f40) on tqpair(0x22aede0): expected_datao=0, payload_size=4096 00:30:14.158 [2024-12-15 05:30:27.358005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358011] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358015] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358037] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.358043] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.158 [2024-12-15 05:30:27.358046] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358049] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.358055] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:30:14.158 [2024-12-15 05:30:27.358059] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:30:14.158 [2024-12-15 05:30:27.358063] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:30:14.158 [2024-12-15 05:30:27.358067] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:30:14.158 [2024-12-15 05:30:27.358072] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:30:14.158 [2024-12-15 05:30:27.358076] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:30:14.158 [2024-12-15 05:30:27.358086] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:14.158 [2024-12-15 05:30:27.358093] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358097] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358100] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.158 [2024-12-15 05:30:27.358107] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:14.158 [2024-12-15 05:30:27.358116] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.158 [2024-12-15 05:30:27.358181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.158 [2024-12-15 05:30:27.358187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.158 [2024-12-15 05:30:27.358190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.158 [2024-12-15 05:30:27.358199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.158 [2024-12-15 05:30:27.358206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.358213] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.159 [2024-12-15 05:30:27.358218] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358221] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358224] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.358229] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.159 [2024-12-15 05:30:27.358234] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358238] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358240] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.358245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.159 [2024-12-15 05:30:27.358250] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358253] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358256] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.358261] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.159 [2024-12-15 05:30:27.358265] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:14.159 [2024-12-15 05:30:27.358277] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:14.159 [2024-12-15 05:30:27.358283] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.358292] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.159 [2024-12-15 05:30:27.358303] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2309f40, cid 0, qid 0 00:30:14.159 [2024-12-15 05:30:27.358307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a0c0, cid 1, qid 0 00:30:14.159 [2024-12-15 05:30:27.358311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a240, cid 2, qid 0 00:30:14.159 [2024-12-15 05:30:27.358315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.159 [2024-12-15 05:30:27.358319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a540, cid 4, qid 0 00:30:14.159 [2024-12-15 05:30:27.358410] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.159 [2024-12-15 05:30:27.358416] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.159 [2024-12-15 05:30:27.358419] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358423] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a540) on tqpair=0x22aede0 00:30:14.159 [2024-12-15 05:30:27.358427] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:30:14.159 [2024-12-15 05:30:27.358432] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:30:14.159 [2024-12-15 05:30:27.358440] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358443] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.358449] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.159 [2024-12-15 05:30:27.358461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a540, cid 4, qid 0 00:30:14.159 [2024-12-15 05:30:27.358533] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.159 [2024-12-15 05:30:27.358539] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.159 [2024-12-15 05:30:27.358542] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358546] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aede0): datao=0, datal=4096, cccid=4 00:30:14.159 [2024-12-15 05:30:27.358549] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230a540) on tqpair(0x22aede0): expected_datao=0, payload_size=4096 00:30:14.159 [2024-12-15 05:30:27.358553] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358562] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.358566] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399066] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.159 [2024-12-15 05:30:27.399077] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.159 [2024-12-15 05:30:27.399081] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399084] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x230a540) on tqpair=0x22aede0 00:30:14.159 [2024-12-15 05:30:27.399096] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:30:14.159 [2024-12-15 05:30:27.399119] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399123] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.399130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.159 [2024-12-15 05:30:27.399136] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399139] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399143] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.399148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.159 [2024-12-15 05:30:27.399163] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a540, cid 4, qid 0 00:30:14.159 [2024-12-15 05:30:27.399168] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a6c0, cid 5, qid 0 00:30:14.159 [2024-12-15 05:30:27.399269] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.159 [2024-12-15 05:30:27.399275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.159 [2024-12-15 05:30:27.399278] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399281] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aede0): datao=0, datal=1024, cccid=4 00:30:14.159 [2024-12-15 05:30:27.399285] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230a540) on tqpair(0x22aede0): expected_datao=0, payload_size=1024 00:30:14.159 [2024-12-15 05:30:27.399289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399295] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399298] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399303] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.159 [2024-12-15 05:30:27.399307] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.159 [2024-12-15 05:30:27.399311] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.399314] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a6c0) on tqpair=0x22aede0 00:30:14.159 [2024-12-15 05:30:27.445002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.159 [2024-12-15 05:30:27.445015] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.159 [2024-12-15 05:30:27.445018] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445021] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a540) on tqpair=0x22aede0 00:30:14.159 [2024-12-15 05:30:27.445032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.445041] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.159 [2024-12-15 05:30:27.445057] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a540, cid 4, qid 0 00:30:14.159 [2024-12-15 05:30:27.445137] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.159 [2024-12-15 05:30:27.445143] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.159 [2024-12-15 05:30:27.445146] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445149] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aede0): datao=0, datal=3072, cccid=4 00:30:14.159 [2024-12-15 05:30:27.445153] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230a540) on tqpair(0x22aede0): expected_datao=0, payload_size=3072 00:30:14.159 [2024-12-15 05:30:27.445157] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445163] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445166] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.159 [2024-12-15 05:30:27.445216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.159 [2024-12-15 05:30:27.445219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445222] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a540) on tqpair=0x22aede0 00:30:14.159 [2024-12-15 05:30:27.445229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x22aede0) 00:30:14.159 [2024-12-15 05:30:27.445238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.159 [2024-12-15 05:30:27.445249] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a540, cid 4, qid 0 00:30:14.159 [2024-12-15 
05:30:27.445330] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.159 [2024-12-15 05:30:27.445335] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.159 [2024-12-15 05:30:27.445338] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445342] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x22aede0): datao=0, datal=8, cccid=4 00:30:14.159 [2024-12-15 05:30:27.445345] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x230a540) on tqpair(0x22aede0): expected_datao=0, payload_size=8 00:30:14.159 [2024-12-15 05:30:27.445349] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445354] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.445358] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.487159] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.159 [2024-12-15 05:30:27.487171] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.159 [2024-12-15 05:30:27.487174] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.159 [2024-12-15 05:30:27.487178] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a540) on tqpair=0x22aede0 00:30:14.159 ===================================================== 00:30:14.159 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:30:14.159 ===================================================== 00:30:14.160 Controller Capabilities/Features 00:30:14.160 ================================ 00:30:14.160 Vendor ID: 0000 00:30:14.160 Subsystem Vendor ID: 0000 00:30:14.160 Serial Number: .................... 00:30:14.160 Model Number: ........................................ 
00:30:14.160 Firmware Version: 25.01 00:30:14.160 Recommended Arb Burst: 0 00:30:14.160 IEEE OUI Identifier: 00 00 00 00:30:14.160 Multi-path I/O 00:30:14.160 May have multiple subsystem ports: No 00:30:14.160 May have multiple controllers: No 00:30:14.160 Associated with SR-IOV VF: No 00:30:14.160 Max Data Transfer Size: 131072 00:30:14.160 Max Number of Namespaces: 0 00:30:14.160 Max Number of I/O Queues: 1024 00:30:14.160 NVMe Specification Version (VS): 1.3 00:30:14.160 NVMe Specification Version (Identify): 1.3 00:30:14.160 Maximum Queue Entries: 128 00:30:14.160 Contiguous Queues Required: Yes 00:30:14.160 Arbitration Mechanisms Supported 00:30:14.160 Weighted Round Robin: Not Supported 00:30:14.160 Vendor Specific: Not Supported 00:30:14.160 Reset Timeout: 15000 ms 00:30:14.160 Doorbell Stride: 4 bytes 00:30:14.160 NVM Subsystem Reset: Not Supported 00:30:14.160 Command Sets Supported 00:30:14.160 NVM Command Set: Supported 00:30:14.160 Boot Partition: Not Supported 00:30:14.160 Memory Page Size Minimum: 4096 bytes 00:30:14.160 Memory Page Size Maximum: 4096 bytes 00:30:14.160 Persistent Memory Region: Not Supported 00:30:14.160 Optional Asynchronous Events Supported 00:30:14.160 Namespace Attribute Notices: Not Supported 00:30:14.160 Firmware Activation Notices: Not Supported 00:30:14.160 ANA Change Notices: Not Supported 00:30:14.160 PLE Aggregate Log Change Notices: Not Supported 00:30:14.160 LBA Status Info Alert Notices: Not Supported 00:30:14.160 EGE Aggregate Log Change Notices: Not Supported 00:30:14.160 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.160 Zone Descriptor Change Notices: Not Supported 00:30:14.160 Discovery Log Change Notices: Supported 00:30:14.160 Controller Attributes 00:30:14.160 128-bit Host Identifier: Not Supported 00:30:14.160 Non-Operational Permissive Mode: Not Supported 00:30:14.160 NVM Sets: Not Supported 00:30:14.160 Read Recovery Levels: Not Supported 00:30:14.160 Endurance Groups: Not Supported 00:30:14.160 
Predictable Latency Mode: Not Supported 00:30:14.160 Traffic Based Keep ALive: Not Supported 00:30:14.160 Namespace Granularity: Not Supported 00:30:14.160 SQ Associations: Not Supported 00:30:14.160 UUID List: Not Supported 00:30:14.160 Multi-Domain Subsystem: Not Supported 00:30:14.160 Fixed Capacity Management: Not Supported 00:30:14.160 Variable Capacity Management: Not Supported 00:30:14.160 Delete Endurance Group: Not Supported 00:30:14.160 Delete NVM Set: Not Supported 00:30:14.160 Extended LBA Formats Supported: Not Supported 00:30:14.160 Flexible Data Placement Supported: Not Supported 00:30:14.160 00:30:14.160 Controller Memory Buffer Support 00:30:14.160 ================================ 00:30:14.160 Supported: No 00:30:14.160 00:30:14.160 Persistent Memory Region Support 00:30:14.160 ================================ 00:30:14.160 Supported: No 00:30:14.160 00:30:14.160 Admin Command Set Attributes 00:30:14.160 ============================ 00:30:14.160 Security Send/Receive: Not Supported 00:30:14.160 Format NVM: Not Supported 00:30:14.160 Firmware Activate/Download: Not Supported 00:30:14.160 Namespace Management: Not Supported 00:30:14.160 Device Self-Test: Not Supported 00:30:14.160 Directives: Not Supported 00:30:14.160 NVMe-MI: Not Supported 00:30:14.160 Virtualization Management: Not Supported 00:30:14.160 Doorbell Buffer Config: Not Supported 00:30:14.160 Get LBA Status Capability: Not Supported 00:30:14.160 Command & Feature Lockdown Capability: Not Supported 00:30:14.160 Abort Command Limit: 1 00:30:14.160 Async Event Request Limit: 4 00:30:14.160 Number of Firmware Slots: N/A 00:30:14.160 Firmware Slot 1 Read-Only: N/A 00:30:14.160 Firmware Activation Without Reset: N/A 00:30:14.160 Multiple Update Detection Support: N/A 00:30:14.160 Firmware Update Granularity: No Information Provided 00:30:14.160 Per-Namespace SMART Log: No 00:30:14.160 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.160 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:30:14.160 Command Effects Log Page: Not Supported 00:30:14.160 Get Log Page Extended Data: Supported 00:30:14.160 Telemetry Log Pages: Not Supported 00:30:14.160 Persistent Event Log Pages: Not Supported 00:30:14.160 Supported Log Pages Log Page: May Support 00:30:14.160 Commands Supported & Effects Log Page: Not Supported 00:30:14.160 Feature Identifiers & Effects Log Page:May Support 00:30:14.160 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.160 Data Area 4 for Telemetry Log: Not Supported 00:30:14.160 Error Log Page Entries Supported: 128 00:30:14.160 Keep Alive: Not Supported 00:30:14.160 00:30:14.160 NVM Command Set Attributes 00:30:14.160 ========================== 00:30:14.160 Submission Queue Entry Size 00:30:14.160 Max: 1 00:30:14.160 Min: 1 00:30:14.160 Completion Queue Entry Size 00:30:14.160 Max: 1 00:30:14.160 Min: 1 00:30:14.160 Number of Namespaces: 0 00:30:14.160 Compare Command: Not Supported 00:30:14.160 Write Uncorrectable Command: Not Supported 00:30:14.160 Dataset Management Command: Not Supported 00:30:14.160 Write Zeroes Command: Not Supported 00:30:14.160 Set Features Save Field: Not Supported 00:30:14.160 Reservations: Not Supported 00:30:14.160 Timestamp: Not Supported 00:30:14.160 Copy: Not Supported 00:30:14.160 Volatile Write Cache: Not Present 00:30:14.160 Atomic Write Unit (Normal): 1 00:30:14.160 Atomic Write Unit (PFail): 1 00:30:14.160 Atomic Compare & Write Unit: 1 00:30:14.160 Fused Compare & Write: Supported 00:30:14.160 Scatter-Gather List 00:30:14.160 SGL Command Set: Supported 00:30:14.160 SGL Keyed: Supported 00:30:14.160 SGL Bit Bucket Descriptor: Not Supported 00:30:14.160 SGL Metadata Pointer: Not Supported 00:30:14.160 Oversized SGL: Not Supported 00:30:14.160 SGL Metadata Address: Not Supported 00:30:14.160 SGL Offset: Supported 00:30:14.160 Transport SGL Data Block: Not Supported 00:30:14.160 Replay Protected Memory Block: Not Supported 00:30:14.160 00:30:14.160 
Firmware Slot Information 00:30:14.160 ========================= 00:30:14.160 Active slot: 0 00:30:14.160 00:30:14.160 00:30:14.160 Error Log 00:30:14.160 ========= 00:30:14.160 00:30:14.160 Active Namespaces 00:30:14.160 ================= 00:30:14.160 Discovery Log Page 00:30:14.160 ================== 00:30:14.160 Generation Counter: 2 00:30:14.160 Number of Records: 2 00:30:14.160 Record Format: 0 00:30:14.160 00:30:14.160 Discovery Log Entry 0 00:30:14.160 ---------------------- 00:30:14.160 Transport Type: 3 (TCP) 00:30:14.160 Address Family: 1 (IPv4) 00:30:14.160 Subsystem Type: 3 (Current Discovery Subsystem) 00:30:14.160 Entry Flags: 00:30:14.160 Duplicate Returned Information: 1 00:30:14.160 Explicit Persistent Connection Support for Discovery: 1 00:30:14.160 Transport Requirements: 00:30:14.160 Secure Channel: Not Required 00:30:14.160 Port ID: 0 (0x0000) 00:30:14.160 Controller ID: 65535 (0xffff) 00:30:14.160 Admin Max SQ Size: 128 00:30:14.160 Transport Service Identifier: 4420 00:30:14.160 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:30:14.160 Transport Address: 10.0.0.2 00:30:14.160 Discovery Log Entry 1 00:30:14.161 ---------------------- 00:30:14.161 Transport Type: 3 (TCP) 00:30:14.161 Address Family: 1 (IPv4) 00:30:14.161 Subsystem Type: 2 (NVM Subsystem) 00:30:14.161 Entry Flags: 00:30:14.161 Duplicate Returned Information: 0 00:30:14.161 Explicit Persistent Connection Support for Discovery: 0 00:30:14.161 Transport Requirements: 00:30:14.161 Secure Channel: Not Required 00:30:14.161 Port ID: 0 (0x0000) 00:30:14.161 Controller ID: 65535 (0xffff) 00:30:14.161 Admin Max SQ Size: 128 00:30:14.161 Transport Service Identifier: 4420 00:30:14.161 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:30:14.161 Transport Address: 10.0.0.2 [2024-12-15 05:30:27.487258] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:30:14.161 [2024-12-15 
05:30:27.487268] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2309f40) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.161 [2024-12-15 05:30:27.487283] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a0c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.161 [2024-12-15 05:30:27.487291] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a240) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.161 [2024-12-15 05:30:27.487299] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.161 [2024-12-15 05:30:27.487310] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.487324] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.487338] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.487398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 
05:30:27.487404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.487407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.487429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.487441] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.487518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.487523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.487526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487534] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:30:14.161 [2024-12-15 05:30:27.487538] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:30:14.161 [2024-12-15 05:30:27.487546] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 
[2024-12-15 05:30:27.487552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.487558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.487569] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.487637] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.487643] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.487648] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487651] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487659] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.487672] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.487681] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.487737] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.487743] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.487746] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487749] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on 
tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487757] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487760] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487763] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.487769] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.487778] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.487837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.487842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.487845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487848] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487856] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487860] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487863] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.487868] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.487878] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.487937] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.487942] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:30:14.161 [2024-12-15 05:30:27.487945] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487948] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.487956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487959] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.487963] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.487968] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.487977] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.488054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.488061] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.488064] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.488076] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488080] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488083] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.488089] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.488098] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.488170] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.488176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.488179] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488182] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.488190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488194] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488197] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.488202] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.488211] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.488270] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.161 [2024-12-15 05:30:27.488275] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.161 [2024-12-15 05:30:27.488278] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488281] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.161 [2024-12-15 05:30:27.488289] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488293] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.161 [2024-12-15 05:30:27.488296] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.161 [2024-12-15 05:30:27.488301] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.161 [2024-12-15 05:30:27.488311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.161 [2024-12-15 05:30:27.488369] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.488374] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.488377] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488381] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.488388] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488392] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488395] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.162 [2024-12-15 05:30:27.488400] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 05:30:27.488410] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.162 [2024-12-15 05:30:27.488486] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.488491] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.488494] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488497] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.488507] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488511] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488514] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.162 [2024-12-15 05:30:27.488519] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 05:30:27.488528] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.162 [2024-12-15 05:30:27.488602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.488608] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.488611] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488614] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.488622] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.162 [2024-12-15 05:30:27.488634] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 05:30:27.488643] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.162 [2024-12-15 05:30:27.488711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.488717] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.488719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488723] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.488732] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488735] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488738] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.162 [2024-12-15 05:30:27.488744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 05:30:27.488753] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.162 [2024-12-15 05:30:27.488817] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.488823] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.488826] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488829] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.488837] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488840] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488843] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.162 [2024-12-15 05:30:27.488849] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 05:30:27.488858] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.162 [2024-12-15 05:30:27.488917] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 
05:30:27.488923] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.488926] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488929] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.488938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488942] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.488945] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.162 [2024-12-15 05:30:27.488951] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 05:30:27.488960] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.162 [2024-12-15 05:30:27.493001] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.493009] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.493012] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.493015] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.493024] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.493027] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.493030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x22aede0) 00:30:14.162 [2024-12-15 05:30:27.493036] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 
05:30:27.493046] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x230a3c0, cid 3, qid 0 00:30:14.162 [2024-12-15 05:30:27.493199] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.493205] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.493208] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.493211] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x230a3c0) on tqpair=0x22aede0 00:30:14.162 [2024-12-15 05:30:27.493218] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 5 milliseconds 00:30:14.162 00:30:14.162 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:30:14.162 [2024-12-15 05:30:27.528401] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:30:14.162 [2024-12-15 05:30:27.528442] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid450270 ] 00:30:14.162 [2024-12-15 05:30:27.565153] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:30:14.162 [2024-12-15 05:30:27.565194] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:30:14.162 [2024-12-15 05:30:27.565199] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:30:14.162 [2024-12-15 05:30:27.565209] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:30:14.162 [2024-12-15 05:30:27.565217] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:30:14.162 [2024-12-15 05:30:27.569148] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:30:14.162 [2024-12-15 05:30:27.569177] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1f19de0 0 00:30:14.162 [2024-12-15 05:30:27.580004] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:30:14.162 [2024-12-15 05:30:27.580017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:30:14.162 [2024-12-15 05:30:27.580024] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:30:14.162 [2024-12-15 05:30:27.580027] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:30:14.162 [2024-12-15 05:30:27.580052] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.580058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.580061] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.162 [2024-12-15 05:30:27.580071] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:30:14.162 [2024-12-15 05:30:27.580088] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.162 [2024-12-15 05:30:27.588002] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.588011] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.588014] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.588018] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.162 [2024-12-15 05:30:27.588028] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:30:14.162 [2024-12-15 05:30:27.588035] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:30:14.162 [2024-12-15 05:30:27.588040] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:30:14.162 [2024-12-15 05:30:27.588050] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.588054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.588057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.162 [2024-12-15 05:30:27.588064] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.162 [2024-12-15 05:30:27.588077] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.162 [2024-12-15 05:30:27.588210] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.162 [2024-12-15 05:30:27.588216] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.162 [2024-12-15 05:30:27.588219] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.162 [2024-12-15 05:30:27.588223] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.162 [2024-12-15 05:30:27.588227] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:30:14.162 [2024-12-15 05:30:27.588234] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:30:14.162 [2024-12-15 05:30:27.588240] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588244] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588247] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.588253] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.163 [2024-12-15 05:30:27.588263] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.163 [2024-12-15 05:30:27.588322] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.163 [2024-12-15 05:30:27.588328] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.163 [2024-12-15 05:30:27.588331] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588334] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.163 [2024-12-15 05:30:27.588339] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:30:14.163 [2024-12-15 05:30:27.588348] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:30:14.163 [2024-12-15 05:30:27.588354] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588358] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588361] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.588366] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.163 [2024-12-15 05:30:27.588377] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.163 [2024-12-15 05:30:27.588437] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.163 [2024-12-15 05:30:27.588443] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.163 [2024-12-15 05:30:27.588446] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588449] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.163 [2024-12-15 05:30:27.588453] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:30:14.163 [2024-12-15 05:30:27.588462] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588465] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588469] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.588474] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.163 [2024-12-15 05:30:27.588483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.163 [2024-12-15 05:30:27.588542] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.163 [2024-12-15 05:30:27.588548] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.163 [2024-12-15 05:30:27.588551] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588555] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.163 [2024-12-15 05:30:27.588558] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:30:14.163 [2024-12-15 05:30:27.588563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:30:14.163 [2024-12-15 05:30:27.588569] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:30:14.163 [2024-12-15 05:30:27.588677] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:30:14.163 [2024-12-15 05:30:27.588682] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:30:14.163 [2024-12-15 05:30:27.588689] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588692] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588695] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.588701] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.163 [2024-12-15 05:30:27.588711] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.163 [2024-12-15 05:30:27.588771] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.163 [2024-12-15 05:30:27.588777] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.163 [2024-12-15 05:30:27.588780] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588784] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.163 [2024-12-15 05:30:27.588790] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:30:14.163 [2024-12-15 05:30:27.588798] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588802] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588805] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.588811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.163 [2024-12-15 05:30:27.588820] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.163 [2024-12-15 05:30:27.588884] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.163 [2024-12-15 05:30:27.588889] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.163 [2024-12-15 05:30:27.588892] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588895] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.163 [2024-12-15 05:30:27.588899] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:30:14.163 [2024-12-15 05:30:27.588904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:30:14.163 [2024-12-15 05:30:27.588910] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:30:14.163 [2024-12-15 05:30:27.588918] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:30:14.163 [2024-12-15 05:30:27.588925] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.588928] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.588934] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.163 [2024-12-15 05:30:27.588944] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.163 [2024-12-15 05:30:27.589039] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.163 [2024-12-15 05:30:27.589046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.163 [2024-12-15 05:30:27.589049] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589052] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=4096, cccid=0 00:30:14.163 [2024-12-15 05:30:27.589056] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f74f40) on tqpair(0x1f19de0): expected_datao=0, payload_size=4096 00:30:14.163 [2024-12-15 05:30:27.589060] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589066] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589070] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589083] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.163 [2024-12-15 05:30:27.589088] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.163 [2024-12-15 05:30:27.589091] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589094] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.163 [2024-12-15 05:30:27.589100] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:30:14.163 [2024-12-15 05:30:27.589105] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:30:14.163 [2024-12-15 05:30:27.589109] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:30:14.163 [2024-12-15 05:30:27.589115] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:30:14.163 [2024-12-15 05:30:27.589119] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:30:14.163 [2024-12-15 05:30:27.589123] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:30:14.163 [2024-12-15 05:30:27.589134] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:30:14.163 [2024-12-15 05:30:27.589142] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589145] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589149] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.589155] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:14.163 [2024-12-15 05:30:27.589165] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f74f40, cid 0, qid 0 00:30:14.163 [2024-12-15 05:30:27.589229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.163 [2024-12-15 05:30:27.589235] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.163 [2024-12-15 05:30:27.589238] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589241] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.163 [2024-12-15 05:30:27.589246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589249] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589253] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.589258] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.163 [2024-12-15 05:30:27.589263] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589266] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589269] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.589274] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:30:14.163 [2024-12-15 05:30:27.589279] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589282] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.163 [2024-12-15 05:30:27.589286] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1f19de0) 00:30:14.163 [2024-12-15 05:30:27.589290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.164 [2024-12-15 05:30:27.589295] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589299] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589302] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19de0) 00:30:14.164 [2024-12-15 05:30:27.589306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.164 [2024-12-15 05:30:27.589310] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589321] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589326] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589330] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19de0) 00:30:14.164 [2024-12-15 05:30:27.589339] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.164 [2024-12-15 05:30:27.589349] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x1f74f40, cid 0, qid 0 00:30:14.164 [2024-12-15 05:30:27.589354] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f750c0, cid 1, qid 0 00:30:14.164 [2024-12-15 05:30:27.589358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75240, cid 2, qid 0 00:30:14.164 [2024-12-15 05:30:27.589362] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f753c0, cid 3, qid 0 00:30:14.164 [2024-12-15 05:30:27.589366] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75540, cid 4, qid 0 00:30:14.164 [2024-12-15 05:30:27.589461] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.164 [2024-12-15 05:30:27.589467] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.164 [2024-12-15 05:30:27.589470] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589473] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75540) on tqpair=0x1f19de0 00:30:14.164 [2024-12-15 05:30:27.589477] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:30:14.164 [2024-12-15 05:30:27.589482] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589491] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589499] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589504] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589508] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.164 [2024-12-15 
05:30:27.589510] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19de0) 00:30:14.164 [2024-12-15 05:30:27.589516] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:30:14.164 [2024-12-15 05:30:27.589526] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75540, cid 4, qid 0 00:30:14.164 [2024-12-15 05:30:27.589587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.164 [2024-12-15 05:30:27.589592] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.164 [2024-12-15 05:30:27.589595] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75540) on tqpair=0x1f19de0 00:30:14.164 [2024-12-15 05:30:27.589647] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589656] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589663] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589666] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19de0) 00:30:14.164 [2024-12-15 05:30:27.589672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.164 [2024-12-15 05:30:27.589682] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75540, cid 4, qid 0 00:30:14.164 [2024-12-15 05:30:27.589755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.164 [2024-12-15 05:30:27.589760] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.164 [2024-12-15 05:30:27.589763] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589768] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=4096, cccid=4 00:30:14.164 [2024-12-15 05:30:27.589773] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f75540) on tqpair(0x1f19de0): expected_datao=0, payload_size=4096 00:30:14.164 [2024-12-15 05:30:27.589776] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589782] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589785] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589795] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.164 [2024-12-15 05:30:27.589801] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.164 [2024-12-15 05:30:27.589804] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589807] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75540) on tqpair=0x1f19de0 00:30:14.164 [2024-12-15 05:30:27.589825] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:30:14.164 [2024-12-15 05:30:27.589836] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589845] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.589851] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589854] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x1f19de0) 00:30:14.164 [2024-12-15 05:30:27.589860] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.164 [2024-12-15 05:30:27.589870] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75540, cid 4, qid 0 00:30:14.164 [2024-12-15 05:30:27.589969] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.164 [2024-12-15 05:30:27.589974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.164 [2024-12-15 05:30:27.589977] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589980] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=4096, cccid=4 00:30:14.164 [2024-12-15 05:30:27.589984] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f75540) on tqpair(0x1f19de0): expected_datao=0, payload_size=4096 00:30:14.164 [2024-12-15 05:30:27.589988] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.589997] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590001] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590015] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.164 [2024-12-15 05:30:27.590021] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.164 [2024-12-15 05:30:27.590024] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75540) on tqpair=0x1f19de0 00:30:14.164 [2024-12-15 05:30:27.590039] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:30:14.164 
[2024-12-15 05:30:27.590048] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.590054] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590057] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19de0) 00:30:14.164 [2024-12-15 05:30:27.590063] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.164 [2024-12-15 05:30:27.590074] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75540, cid 4, qid 0 00:30:14.164 [2024-12-15 05:30:27.590147] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.164 [2024-12-15 05:30:27.590153] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.164 [2024-12-15 05:30:27.590156] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590159] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=4096, cccid=4 00:30:14.164 [2024-12-15 05:30:27.590163] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f75540) on tqpair(0x1f19de0): expected_datao=0, payload_size=4096 00:30:14.164 [2024-12-15 05:30:27.590167] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590172] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590175] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.164 [2024-12-15 05:30:27.590197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.164 [2024-12-15 05:30:27.590201] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.164 [2024-12-15 05:30:27.590204] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75540) on tqpair=0x1f19de0 00:30:14.164 [2024-12-15 05:30:27.590210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.590218] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.590226] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.590232] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.590236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.590241] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:30:14.164 [2024-12-15 05:30:27.590246] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:30:14.165 [2024-12-15 05:30:27.590250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:30:14.165 [2024-12-15 05:30:27.590254] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:30:14.165 [2024-12-15 05:30:27.590267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590271] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590276] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.165 [2024-12-15 05:30:27.590282] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590285] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590288] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590294] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:30:14.165 [2024-12-15 05:30:27.590307] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75540, cid 4, qid 0 00:30:14.165 [2024-12-15 05:30:27.590311] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f756c0, cid 5, qid 0 00:30:14.165 [2024-12-15 05:30:27.590398] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.165 [2024-12-15 05:30:27.590404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.165 [2024-12-15 05:30:27.590409] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590412] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75540) on tqpair=0x1f19de0 00:30:14.165 [2024-12-15 05:30:27.590418] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.165 [2024-12-15 05:30:27.590422] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.165 [2024-12-15 05:30:27.590426] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590429] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f756c0) on tqpair=0x1f19de0 00:30:14.165 [2024-12-15 
05:30:27.590437] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590440] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.165 [2024-12-15 05:30:27.590456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f756c0, cid 5, qid 0 00:30:14.165 [2024-12-15 05:30:27.590518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.165 [2024-12-15 05:30:27.590523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.165 [2024-12-15 05:30:27.590527] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f756c0) on tqpair=0x1f19de0 00:30:14.165 [2024-12-15 05:30:27.590537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590546] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.165 [2024-12-15 05:30:27.590555] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f756c0, cid 5, qid 0 00:30:14.165 [2024-12-15 05:30:27.590628] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.165 [2024-12-15 05:30:27.590633] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.165 [2024-12-15 05:30:27.590636] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590639] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x1f756c0) on tqpair=0x1f19de0 00:30:14.165 [2024-12-15 05:30:27.590648] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590651] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590657] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.165 [2024-12-15 05:30:27.590667] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f756c0, cid 5, qid 0 00:30:14.165 [2024-12-15 05:30:27.590727] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.165 [2024-12-15 05:30:27.590733] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.165 [2024-12-15 05:30:27.590736] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590739] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f756c0) on tqpair=0x1f19de0 00:30:14.165 [2024-12-15 05:30:27.590751] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590755] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590761] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.165 [2024-12-15 05:30:27.590767] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590770] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:30:14.165 [2024-12-15 05:30:27.590783] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590786] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590791] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.165 [2024-12-15 05:30:27.590798] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590801] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f19de0) 00:30:14.165 [2024-12-15 05:30:27.590806] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.165 [2024-12-15 05:30:27.590816] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f756c0, cid 5, qid 0 00:30:14.165 [2024-12-15 05:30:27.590821] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75540, cid 4, qid 0 00:30:14.165 [2024-12-15 05:30:27.590825] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f75840, cid 6, qid 0 00:30:14.165 [2024-12-15 05:30:27.590829] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f759c0, cid 7, qid 0 00:30:14.165 [2024-12-15 05:30:27.590961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.165 [2024-12-15 05:30:27.590967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.165 [2024-12-15 05:30:27.590970] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.590973] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=8192, cccid=5 00:30:14.165 [2024-12-15 05:30:27.590977] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f756c0) on tqpair(0x1f19de0): expected_datao=0, payload_size=8192 00:30:14.165 [2024-12-15 05:30:27.590981] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591000] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591004] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591012] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.165 [2024-12-15 05:30:27.591017] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.165 [2024-12-15 05:30:27.591020] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591023] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=512, cccid=4 00:30:14.165 [2024-12-15 05:30:27.591027] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f75540) on tqpair(0x1f19de0): expected_datao=0, payload_size=512 00:30:14.165 [2024-12-15 05:30:27.591031] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591036] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591039] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591044] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.165 [2024-12-15 05:30:27.591049] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.165 [2024-12-15 05:30:27.591052] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591055] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=512, cccid=6 00:30:14.165 [2024-12-15 05:30:27.591058] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x1f75840) on tqpair(0x1f19de0): expected_datao=0, payload_size=512 00:30:14.165 [2024-12-15 05:30:27.591062] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591067] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591070] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591077] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:30:14.165 [2024-12-15 05:30:27.591082] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:30:14.165 [2024-12-15 05:30:27.591085] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591088] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1f19de0): datao=0, datal=4096, cccid=7 00:30:14.165 [2024-12-15 05:30:27.591092] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x1f759c0) on tqpair(0x1f19de0): expected_datao=0, payload_size=4096 00:30:14.165 [2024-12-15 05:30:27.591096] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591101] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591104] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591112] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.165 [2024-12-15 05:30:27.591117] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.165 [2024-12-15 05:30:27.591119] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.165 [2024-12-15 05:30:27.591123] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f756c0) on tqpair=0x1f19de0 00:30:14.165 [2024-12-15 05:30:27.591136] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.165 [2024-12-15 05:30:27.591141] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.165 [2024-12-15 05:30:27.591144] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.166 [2024-12-15 05:30:27.591147] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75540) on tqpair=0x1f19de0 00:30:14.166 [2024-12-15 05:30:27.591156] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.166 [2024-12-15 05:30:27.591161] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.166 [2024-12-15 05:30:27.591164] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.166 [2024-12-15 05:30:27.591168] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75840) on tqpair=0x1f19de0 00:30:14.166 [2024-12-15 05:30:27.591174] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.166 [2024-12-15 05:30:27.591179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.166 [2024-12-15 05:30:27.591182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.166 [2024-12-15 05:30:27.591185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f759c0) on tqpair=0x1f19de0 00:30:14.166 ===================================================== 00:30:14.166 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:14.166 ===================================================== 00:30:14.166 Controller Capabilities/Features 00:30:14.166 ================================ 00:30:14.166 Vendor ID: 8086 00:30:14.166 Subsystem Vendor ID: 8086 00:30:14.166 Serial Number: SPDK00000000000001 00:30:14.166 Model Number: SPDK bdev Controller 00:30:14.166 Firmware Version: 25.01 00:30:14.166 Recommended Arb Burst: 6 00:30:14.166 IEEE OUI Identifier: e4 d2 5c 00:30:14.166 Multi-path I/O 00:30:14.166 May have multiple subsystem ports: Yes 00:30:14.166 May have multiple controllers: Yes 00:30:14.166 Associated with SR-IOV VF: No 
00:30:14.166 Max Data Transfer Size: 131072 00:30:14.166 Max Number of Namespaces: 32 00:30:14.166 Max Number of I/O Queues: 127 00:30:14.166 NVMe Specification Version (VS): 1.3 00:30:14.166 NVMe Specification Version (Identify): 1.3 00:30:14.166 Maximum Queue Entries: 128 00:30:14.166 Contiguous Queues Required: Yes 00:30:14.166 Arbitration Mechanisms Supported 00:30:14.166 Weighted Round Robin: Not Supported 00:30:14.166 Vendor Specific: Not Supported 00:30:14.166 Reset Timeout: 15000 ms 00:30:14.166 Doorbell Stride: 4 bytes 00:30:14.166 NVM Subsystem Reset: Not Supported 00:30:14.166 Command Sets Supported 00:30:14.166 NVM Command Set: Supported 00:30:14.166 Boot Partition: Not Supported 00:30:14.166 Memory Page Size Minimum: 4096 bytes 00:30:14.166 Memory Page Size Maximum: 4096 bytes 00:30:14.166 Persistent Memory Region: Not Supported 00:30:14.166 Optional Asynchronous Events Supported 00:30:14.166 Namespace Attribute Notices: Supported 00:30:14.166 Firmware Activation Notices: Not Supported 00:30:14.166 ANA Change Notices: Not Supported 00:30:14.166 PLE Aggregate Log Change Notices: Not Supported 00:30:14.166 LBA Status Info Alert Notices: Not Supported 00:30:14.166 EGE Aggregate Log Change Notices: Not Supported 00:30:14.166 Normal NVM Subsystem Shutdown event: Not Supported 00:30:14.166 Zone Descriptor Change Notices: Not Supported 00:30:14.166 Discovery Log Change Notices: Not Supported 00:30:14.166 Controller Attributes 00:30:14.166 128-bit Host Identifier: Supported 00:30:14.166 Non-Operational Permissive Mode: Not Supported 00:30:14.166 NVM Sets: Not Supported 00:30:14.166 Read Recovery Levels: Not Supported 00:30:14.166 Endurance Groups: Not Supported 00:30:14.166 Predictable Latency Mode: Not Supported 00:30:14.166 Traffic Based Keep ALive: Not Supported 00:30:14.166 Namespace Granularity: Not Supported 00:30:14.166 SQ Associations: Not Supported 00:30:14.166 UUID List: Not Supported 00:30:14.166 Multi-Domain Subsystem: Not Supported 00:30:14.166 
Fixed Capacity Management: Not Supported 00:30:14.166 Variable Capacity Management: Not Supported 00:30:14.166 Delete Endurance Group: Not Supported 00:30:14.166 Delete NVM Set: Not Supported 00:30:14.166 Extended LBA Formats Supported: Not Supported 00:30:14.166 Flexible Data Placement Supported: Not Supported 00:30:14.166 00:30:14.166 Controller Memory Buffer Support 00:30:14.166 ================================ 00:30:14.166 Supported: No 00:30:14.166 00:30:14.166 Persistent Memory Region Support 00:30:14.166 ================================ 00:30:14.166 Supported: No 00:30:14.166 00:30:14.166 Admin Command Set Attributes 00:30:14.166 ============================ 00:30:14.166 Security Send/Receive: Not Supported 00:30:14.166 Format NVM: Not Supported 00:30:14.166 Firmware Activate/Download: Not Supported 00:30:14.166 Namespace Management: Not Supported 00:30:14.166 Device Self-Test: Not Supported 00:30:14.166 Directives: Not Supported 00:30:14.166 NVMe-MI: Not Supported 00:30:14.166 Virtualization Management: Not Supported 00:30:14.166 Doorbell Buffer Config: Not Supported 00:30:14.166 Get LBA Status Capability: Not Supported 00:30:14.166 Command & Feature Lockdown Capability: Not Supported 00:30:14.166 Abort Command Limit: 4 00:30:14.166 Async Event Request Limit: 4 00:30:14.166 Number of Firmware Slots: N/A 00:30:14.166 Firmware Slot 1 Read-Only: N/A 00:30:14.166 Firmware Activation Without Reset: N/A 00:30:14.166 Multiple Update Detection Support: N/A 00:30:14.166 Firmware Update Granularity: No Information Provided 00:30:14.166 Per-Namespace SMART Log: No 00:30:14.166 Asymmetric Namespace Access Log Page: Not Supported 00:30:14.166 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:30:14.166 Command Effects Log Page: Supported 00:30:14.166 Get Log Page Extended Data: Supported 00:30:14.166 Telemetry Log Pages: Not Supported 00:30:14.166 Persistent Event Log Pages: Not Supported 00:30:14.166 Supported Log Pages Log Page: May Support 00:30:14.166 Commands Supported & 
Effects Log Page: Not Supported 00:30:14.166 Feature Identifiers & Effects Log Page:May Support 00:30:14.166 NVMe-MI Commands & Effects Log Page: May Support 00:30:14.166 Data Area 4 for Telemetry Log: Not Supported 00:30:14.166 Error Log Page Entries Supported: 128 00:30:14.166 Keep Alive: Supported 00:30:14.166 Keep Alive Granularity: 10000 ms 00:30:14.166 00:30:14.166 NVM Command Set Attributes 00:30:14.166 ========================== 00:30:14.166 Submission Queue Entry Size 00:30:14.166 Max: 64 00:30:14.166 Min: 64 00:30:14.166 Completion Queue Entry Size 00:30:14.166 Max: 16 00:30:14.166 Min: 16 00:30:14.166 Number of Namespaces: 32 00:30:14.166 Compare Command: Supported 00:30:14.166 Write Uncorrectable Command: Not Supported 00:30:14.166 Dataset Management Command: Supported 00:30:14.166 Write Zeroes Command: Supported 00:30:14.166 Set Features Save Field: Not Supported 00:30:14.166 Reservations: Supported 00:30:14.166 Timestamp: Not Supported 00:30:14.166 Copy: Supported 00:30:14.166 Volatile Write Cache: Present 00:30:14.166 Atomic Write Unit (Normal): 1 00:30:14.166 Atomic Write Unit (PFail): 1 00:30:14.166 Atomic Compare & Write Unit: 1 00:30:14.166 Fused Compare & Write: Supported 00:30:14.166 Scatter-Gather List 00:30:14.166 SGL Command Set: Supported 00:30:14.166 SGL Keyed: Supported 00:30:14.166 SGL Bit Bucket Descriptor: Not Supported 00:30:14.166 SGL Metadata Pointer: Not Supported 00:30:14.166 Oversized SGL: Not Supported 00:30:14.166 SGL Metadata Address: Not Supported 00:30:14.166 SGL Offset: Supported 00:30:14.166 Transport SGL Data Block: Not Supported 00:30:14.166 Replay Protected Memory Block: Not Supported 00:30:14.166 00:30:14.166 Firmware Slot Information 00:30:14.166 ========================= 00:30:14.166 Active slot: 1 00:30:14.166 Slot 1 Firmware Revision: 25.01 00:30:14.166 00:30:14.166 00:30:14.166 Commands Supported and Effects 00:30:14.166 ============================== 00:30:14.166 Admin Commands 00:30:14.166 -------------- 
00:30:14.166 Get Log Page (02h): Supported 00:30:14.166 Identify (06h): Supported 00:30:14.166 Abort (08h): Supported 00:30:14.166 Set Features (09h): Supported 00:30:14.166 Get Features (0Ah): Supported 00:30:14.166 Asynchronous Event Request (0Ch): Supported 00:30:14.166 Keep Alive (18h): Supported 00:30:14.166 I/O Commands 00:30:14.166 ------------ 00:30:14.166 Flush (00h): Supported LBA-Change 00:30:14.166 Write (01h): Supported LBA-Change 00:30:14.166 Read (02h): Supported 00:30:14.166 Compare (05h): Supported 00:30:14.166 Write Zeroes (08h): Supported LBA-Change 00:30:14.166 Dataset Management (09h): Supported LBA-Change 00:30:14.166 Copy (19h): Supported LBA-Change 00:30:14.166 00:30:14.166 Error Log 00:30:14.166 ========= 00:30:14.166 00:30:14.166 Arbitration 00:30:14.166 =========== 00:30:14.166 Arbitration Burst: 1 00:30:14.166 00:30:14.166 Power Management 00:30:14.166 ================ 00:30:14.166 Number of Power States: 1 00:30:14.166 Current Power State: Power State #0 00:30:14.166 Power State #0: 00:30:14.166 Max Power: 0.00 W 00:30:14.166 Non-Operational State: Operational 00:30:14.166 Entry Latency: Not Reported 00:30:14.166 Exit Latency: Not Reported 00:30:14.166 Relative Read Throughput: 0 00:30:14.166 Relative Read Latency: 0 00:30:14.166 Relative Write Throughput: 0 00:30:14.166 Relative Write Latency: 0 00:30:14.166 Idle Power: Not Reported 00:30:14.167 Active Power: Not Reported 00:30:14.167 Non-Operational Permissive Mode: Not Supported 00:30:14.167 00:30:14.167 Health Information 00:30:14.167 ================== 00:30:14.167 Critical Warnings: 00:30:14.167 Available Spare Space: OK 00:30:14.167 Temperature: OK 00:30:14.167 Device Reliability: OK 00:30:14.167 Read Only: No 00:30:14.167 Volatile Memory Backup: OK 00:30:14.167 Current Temperature: 0 Kelvin (-273 Celsius) 00:30:14.167 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:30:14.167 Available Spare: 0% 00:30:14.167 Available Spare Threshold: 0% 00:30:14.167 Life Percentage 
Used:[2024-12-15 05:30:27.591267] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591272] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x1f19de0) 00:30:14.167 [2024-12-15 05:30:27.591278] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.167 [2024-12-15 05:30:27.591289] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f759c0, cid 7, qid 0 00:30:14.167 [2024-12-15 05:30:27.591363] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.167 [2024-12-15 05:30:27.591369] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.167 [2024-12-15 05:30:27.591372] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591375] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f759c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591402] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:30:14.167 [2024-12-15 05:30:27.591411] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f74f40) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.167 [2024-12-15 05:30:27.591421] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f750c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.167 [2024-12-15 05:30:27.591431] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f75240) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591435] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.167 [2024-12-15 05:30:27.591440] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f753c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:30:14.167 [2024-12-15 05:30:27.591450] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591453] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591457] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19de0) 00:30:14.167 [2024-12-15 05:30:27.591463] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.167 [2024-12-15 05:30:27.591474] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f753c0, cid 3, qid 0 00:30:14.167 [2024-12-15 05:30:27.591536] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.167 [2024-12-15 05:30:27.591541] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.167 [2024-12-15 05:30:27.591545] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591548] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f753c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591554] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591557] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591560] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19de0) 00:30:14.167 [2024-12-15 05:30:27.591565] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.167 [2024-12-15 05:30:27.591577] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f753c0, cid 3, qid 0 00:30:14.167 [2024-12-15 05:30:27.591644] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.167 [2024-12-15 05:30:27.591649] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.167 [2024-12-15 05:30:27.591652] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591655] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f753c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591660] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:30:14.167 [2024-12-15 05:30:27.591664] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:30:14.167 [2024-12-15 05:30:27.591671] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591678] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19de0) 00:30:14.167 [2024-12-15 05:30:27.591684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.167 [2024-12-15 05:30:27.591693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f753c0, cid 3, qid 0 00:30:14.167 [2024-12-15 05:30:27.591755] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.167 [2024-12-15 05:30:27.591761] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.167 [2024-12-15 05:30:27.591764] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591767] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f753c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591775] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591779] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591782] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19de0) 00:30:14.167 [2024-12-15 05:30:27.591789] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.167 [2024-12-15 05:30:27.591799] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f753c0, cid 3, qid 0 00:30:14.167 [2024-12-15 05:30:27.591864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.167 [2024-12-15 05:30:27.591869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.167 [2024-12-15 05:30:27.591872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f753c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19de0) 00:30:14.167 [2024-12-15 05:30:27.591896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.167 [2024-12-15 05:30:27.591906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f753c0, cid 3, qid 0 00:30:14.167 [2024-12-15 05:30:27.591963] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.167 [2024-12-15 
05:30:27.591969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.167 [2024-12-15 05:30:27.591972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f753c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.591983] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591987] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.591990] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1f19de0) 00:30:14.167 [2024-12-15 05:30:27.596002] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:30:14.167 [2024-12-15 05:30:27.596015] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1f753c0, cid 3, qid 0 00:30:14.167 [2024-12-15 05:30:27.596167] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:30:14.167 [2024-12-15 05:30:27.596173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:30:14.167 [2024-12-15 05:30:27.596176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:30:14.167 [2024-12-15 05:30:27.596179] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1f753c0) on tqpair=0x1f19de0 00:30:14.167 [2024-12-15 05:30:27.596185] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 4 milliseconds 00:30:14.167 0% 00:30:14.167 Data Units Read: 0 00:30:14.167 Data Units Written: 0 00:30:14.167 Host Read Commands: 0 00:30:14.167 Host Write Commands: 0 00:30:14.167 Controller Busy Time: 0 minutes 00:30:14.167 Power Cycles: 0 00:30:14.167 Power On Hours: 0 hours 00:30:14.167 Unsafe Shutdowns: 0 00:30:14.167 Unrecoverable Media Errors: 0 00:30:14.167 Lifetime Error Log Entries: 0 
00:30:14.167 Warning Temperature Time: 0 minutes 00:30:14.167 Critical Temperature Time: 0 minutes 00:30:14.167 00:30:14.167 Number of Queues 00:30:14.167 ================ 00:30:14.167 Number of I/O Submission Queues: 127 00:30:14.167 Number of I/O Completion Queues: 127 00:30:14.167 00:30:14.167 Active Namespaces 00:30:14.167 ================= 00:30:14.167 Namespace ID:1 00:30:14.167 Error Recovery Timeout: Unlimited 00:30:14.167 Command Set Identifier: NVM (00h) 00:30:14.167 Deallocate: Supported 00:30:14.167 Deallocated/Unwritten Error: Not Supported 00:30:14.167 Deallocated Read Value: Unknown 00:30:14.167 Deallocate in Write Zeroes: Not Supported 00:30:14.167 Deallocated Guard Field: 0xFFFF 00:30:14.167 Flush: Supported 00:30:14.167 Reservation: Supported 00:30:14.167 Namespace Sharing Capabilities: Multiple Controllers 00:30:14.167 Size (in LBAs): 131072 (0GiB) 00:30:14.167 Capacity (in LBAs): 131072 (0GiB) 00:30:14.167 Utilization (in LBAs): 131072 (0GiB) 00:30:14.167 NGUID: ABCDEF0123456789ABCDEF0123456789 00:30:14.167 EUI64: ABCDEF0123456789 00:30:14.167 UUID: ed9ec3bb-51b5-4281-9c8f-9069e8722fca 00:30:14.167 Thin Provisioning: Not Supported 00:30:14.167 Per-NS Atomic Units: Yes 00:30:14.167 Atomic Boundary Size (Normal): 0 00:30:14.167 Atomic Boundary Size (PFail): 0 00:30:14.167 Atomic Boundary Offset: 0 00:30:14.167 Maximum Single Source Range Length: 65535 00:30:14.167 Maximum Copy Length: 65535 00:30:14.167 Maximum Source Range Count: 1 00:30:14.167 NGUID/EUI64 Never Reused: No 00:30:14.167 Namespace Write Protected: No 00:30:14.167 Number of LBA Formats: 1 00:30:14.168 Current LBA Format: LBA Format #00 00:30:14.168 LBA Format #00: Data Size: 512 Metadata Size: 0 00:30:14.168 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.168 05:30:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@516 -- # nvmfcleanup 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # sync 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set +e 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # for i in {1..20} 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:30:14.168 rmmod nvme_tcp 00:30:14.168 rmmod nvme_fabrics 00:30:14.168 rmmod nvme_keyring 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@128 -- # set -e 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # return 0 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@517 -- # '[' -n 450028 ']' 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@518 -- # killprocess 450028 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 450028 ']' 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 450028 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:30:14.168 05:30:27 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 450028 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 450028' 00:30:14.168 killing process with pid 450028 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 450028 00:30:14.168 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 450028 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # iptr 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-save 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@791 -- # iptables-restore 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # remove_spdk_ns 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:30:14.427 05:30:27 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.331 05:30:29 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:30:16.331 00:30:16.331 real 0m9.167s 00:30:16.331 user 0m5.141s 00:30:16.331 sys 0m4.778s 00:30:16.331 05:30:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.331 05:30:29 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:30:16.331 ************************************ 00:30:16.331 END TEST nvmf_identify 00:30:16.331 ************************************ 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:30:16.591 ************************************ 00:30:16.591 START TEST nvmf_perf 00:30:16.591 ************************************ 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:30:16.591 * Looking for test storage... 
00:30:16.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lcov --version 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.591 --rc genhtml_branch_coverage=1 00:30:16.591 --rc genhtml_function_coverage=1 00:30:16.591 --rc genhtml_legend=1 00:30:16.591 --rc geninfo_all_blocks=1 00:30:16.591 --rc geninfo_unexecuted_blocks=1 00:30:16.591 00:30:16.591 ' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:30:16.591 --rc genhtml_branch_coverage=1 00:30:16.591 --rc genhtml_function_coverage=1 00:30:16.591 --rc genhtml_legend=1 00:30:16.591 --rc geninfo_all_blocks=1 00:30:16.591 --rc geninfo_unexecuted_blocks=1 00:30:16.591 00:30:16.591 ' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.591 --rc genhtml_branch_coverage=1 00:30:16.591 --rc genhtml_function_coverage=1 00:30:16.591 --rc genhtml_legend=1 00:30:16.591 --rc geninfo_all_blocks=1 00:30:16.591 --rc geninfo_unexecuted_blocks=1 00:30:16.591 00:30:16.591 ' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:16.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:16.591 --rc genhtml_branch_coverage=1 00:30:16.591 --rc genhtml_function_coverage=1 00:30:16.591 --rc genhtml_legend=1 00:30:16.591 --rc geninfo_all_blocks=1 00:30:16.591 --rc geninfo_unexecuted_blocks=1 00:30:16.591 00:30:16.591 ' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:16.591 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # : 0 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:30:16.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:30:16.592 05:30:30 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@476 -- # prepare_net_devs 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # xtrace_disable 00:30:16.592 05:30:30 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # pci_devs=() 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # local -a pci_devs 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:30:23.164 05:30:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # pci_drivers=() 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # net_devs=() 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@319 -- # local -ga net_devs 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # e810=() 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # local -ga e810 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # x722=() 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # local -ga x722 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # mlx=() 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@322 -- # local -ga mlx 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.164 
05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:30:23.164 Found 0000:af:00.0 (0x8086 - 0x159b) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:30:23.164 Found 0000:af:00.1 (0x8086 - 
0x159b) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.164 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:30:23.165 Found net devices under 0000:af:00.0: cvl_0_0 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:30:23.165 05:30:35 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:30:23.165 Found net devices under 0000:af:00.1: cvl_0_1 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@442 -- # is_hw=yes 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:30:23.165 05:30:35 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT' 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:30:23.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:23.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.232 ms 00:30:23.165 00:30:23.165 --- 10.0.0.2 ping statistics --- 00:30:23.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.165 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:30:23.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.188 ms 00:30:23.165 00:30:23.165 --- 10.0.0.1 ping statistics --- 00:30:23.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.165 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # return 0 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@507 -- # timing_enter 
start_nvmf_tgt 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@509 -- # nvmfpid=453729 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@510 -- # waitforlisten 453729 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 453729 ']' 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:23.165 [2024-12-15 05:30:36.196225] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:30:23.165 [2024-12-15 05:30:36.196267] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.165 [2024-12-15 05:30:36.272624] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:23.165 [2024-12-15 05:30:36.295678] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.165 [2024-12-15 05:30:36.295712] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.165 [2024-12-15 05:30:36.295720] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.165 [2024-12-15 05:30:36.295725] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.165 [2024-12-15 05:30:36.295730] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:23.165 [2024-12-15 05:30:36.297166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.165 [2024-12-15 05:30:36.297262] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:30:23.165 [2024-12-15 05:30:36.297368] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:23.165 [2024-12-15 05:30:36.297369] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:30:23.165 05:30:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:26.454 05:30:39 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:30:26.454 05:30:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:30:26.454 [2024-12-15 05:30:40.068622] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:26.454 05:30:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:26.713 05:30:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:26.713 05:30:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:26.972 05:30:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:30:26.972 05:30:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:30:27.231 05:30:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:27.231 [2024-12-15 05:30:40.871589] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:27.231 05:30:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:30:27.490 05:30:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:30:27.490 05:30:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:27.490 05:30:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:30:27.490 05:30:41 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:30:28.868 Initializing NVMe Controllers 00:30:28.868 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:30:28.868 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:30:28.868 Initialization complete. Launching workers. 00:30:28.868 ======================================================== 00:30:28.868 Latency(us) 00:30:28.868 Device Information : IOPS MiB/s Average min max 00:30:28.868 PCIE (0000:5e:00.0) NSID 1 from core 0: 98236.03 383.73 325.34 40.15 4672.49 00:30:28.868 ======================================================== 00:30:28.868 Total : 98236.03 383.73 325.34 40.15 4672.49 00:30:28.868 00:30:28.868 05:30:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:30.246 Initializing NVMe Controllers 00:30:30.246 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:30.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:30.246 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:30.246 Initialization complete. Launching workers. 
00:30:30.246 ======================================================== 00:30:30.246 Latency(us) 00:30:30.246 Device Information : IOPS MiB/s Average min max 00:30:30.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 100.00 0.39 10229.27 105.21 44689.52 00:30:30.246 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 67.00 0.26 15100.66 6377.71 47905.62 00:30:30.246 ======================================================== 00:30:30.246 Total : 167.00 0.65 12183.66 105.21 47905.62 00:30:30.246 00:30:30.246 05:30:43 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:31.622 Initializing NVMe Controllers 00:30:31.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:31.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:31.622 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:31.622 Initialization complete. Launching workers. 
00:30:31.622 ======================================================== 00:30:31.622 Latency(us) 00:30:31.622 Device Information : IOPS MiB/s Average min max 00:30:31.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11262.14 43.99 2840.19 374.45 7172.77 00:30:31.622 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3822.45 14.93 8398.33 5774.64 16001.60 00:30:31.622 ======================================================== 00:30:31.622 Total : 15084.59 58.92 4248.63 374.45 16001.60 00:30:31.622 00:30:31.622 05:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:30:31.622 05:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:30:31.622 05:30:44 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:34.157 Initializing NVMe Controllers 00:30:34.157 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.157 Controller IO queue size 128, less than required. 00:30:34.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.157 Controller IO queue size 128, less than required. 00:30:34.157 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:34.157 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:34.157 Initialization complete. Launching workers. 
00:30:34.157 ======================================================== 00:30:34.157 Latency(us) 00:30:34.157 Device Information : IOPS MiB/s Average min max 00:30:34.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1844.06 461.02 70549.48 48435.78 122384.07 00:30:34.157 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 597.74 149.43 223482.48 56315.90 327140.68 00:30:34.157 ======================================================== 00:30:34.157 Total : 2441.80 610.45 107986.53 48435.78 327140.68 00:30:34.157 00:30:34.157 05:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:30:34.157 No valid NVMe controllers or AIO or URING devices found 00:30:34.157 Initializing NVMe Controllers 00:30:34.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:34.158 Controller IO queue size 128, less than required. 00:30:34.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.158 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:30:34.158 Controller IO queue size 128, less than required. 00:30:34.158 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:34.158 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:30:34.158 WARNING: Some requested NVMe devices were skipped 00:30:34.158 05:30:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:30:36.693 Initializing NVMe Controllers 00:30:36.693 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:36.693 Controller IO queue size 128, less than required. 00:30:36.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.693 Controller IO queue size 128, less than required. 00:30:36.693 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:30:36.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:36.693 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:30:36.693 Initialization complete. Launching workers. 
00:30:36.693 00:30:36.693 ==================== 00:30:36.693 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:30:36.693 TCP transport: 00:30:36.693 polls: 13171 00:30:36.693 idle_polls: 9212 00:30:36.693 sock_completions: 3959 00:30:36.693 nvme_completions: 6603 00:30:36.693 submitted_requests: 9876 00:30:36.693 queued_requests: 1 00:30:36.693 00:30:36.693 ==================== 00:30:36.693 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:30:36.693 TCP transport: 00:30:36.693 polls: 13246 00:30:36.693 idle_polls: 8249 00:30:36.693 sock_completions: 4997 00:30:36.693 nvme_completions: 6377 00:30:36.693 submitted_requests: 9546 00:30:36.693 queued_requests: 1 00:30:36.693 ======================================================== 00:30:36.693 Latency(us) 00:30:36.693 Device Information : IOPS MiB/s Average min max 00:30:36.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1647.77 411.94 80292.00 46627.12 137307.69 00:30:36.693 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1591.37 397.84 81003.57 48533.40 140903.96 00:30:36.693 ======================================================== 00:30:36.693 Total : 3239.14 809.78 80641.59 46627.12 140903.96 00:30:36.693 00:30:36.693 05:30:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:30:36.693 05:30:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:36.952 05:30:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 1 -eq 1 ']' 00:30:36.952 05:30:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@71 -- # '[' -n 0000:5e:00.0 ']' 00:30:36.952 05:30:50 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore Nvme0n1 lvs_0 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- 
host/perf.sh@72 -- # ls_guid=c8548c87-8afd-430c-8c0c-2d8623b16d41 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@73 -- # get_lvs_free_mb c8548c87-8afd-430c-8c0c-2d8623b16d41 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=c8548c87-8afd-430c-8c0c-2d8623b16d41 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:40.241 { 00:30:40.241 "uuid": "c8548c87-8afd-430c-8c0c-2d8623b16d41", 00:30:40.241 "name": "lvs_0", 00:30:40.241 "base_bdev": "Nvme0n1", 00:30:40.241 "total_data_clusters": 238234, 00:30:40.241 "free_clusters": 238234, 00:30:40.241 "block_size": 512, 00:30:40.241 "cluster_size": 4194304 00:30:40.241 } 00:30:40.241 ]' 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="c8548c87-8afd-430c-8c0c-2d8623b16d41") .free_clusters' 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=238234 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="c8548c87-8afd-430c-8c0c-2d8623b16d41") .cluster_size' 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=952936 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 952936 
00:30:40.241 952936 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@77 -- # '[' 952936 -gt 20480 ']' 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@78 -- # free_mb=20480 00:30:40.241 05:30:53 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c8548c87-8afd-430c-8c0c-2d8623b16d41 lbd_0 20480 00:30:40.808 05:30:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@80 -- # lb_guid=1a1bf300-22fc-4ff2-8174-07cd1155a20d 00:30:40.808 05:30:54 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore 1a1bf300-22fc-4ff2-8174-07cd1155a20d lvs_n_0 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@83 -- # ls_nested_guid=10da769c-75b4-4479-aff1-65f954f4dcc4 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@84 -- # get_lvs_free_mb 10da769c-75b4-4479-aff1-65f954f4dcc4 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1368 -- # local lvs_uuid=10da769c-75b4-4479-aff1-65f954f4dcc4 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1369 -- # local lvs_info 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1370 -- # local fc 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1371 -- # local cs 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:30:41.744 { 00:30:41.744 "uuid": "c8548c87-8afd-430c-8c0c-2d8623b16d41", 00:30:41.744 "name": "lvs_0", 00:30:41.744 "base_bdev": "Nvme0n1", 00:30:41.744 "total_data_clusters": 238234, 00:30:41.744 "free_clusters": 233114, 00:30:41.744 "block_size": 512, 00:30:41.744 
"cluster_size": 4194304 00:30:41.744 }, 00:30:41.744 { 00:30:41.744 "uuid": "10da769c-75b4-4479-aff1-65f954f4dcc4", 00:30:41.744 "name": "lvs_n_0", 00:30:41.744 "base_bdev": "1a1bf300-22fc-4ff2-8174-07cd1155a20d", 00:30:41.744 "total_data_clusters": 5114, 00:30:41.744 "free_clusters": 5114, 00:30:41.744 "block_size": 512, 00:30:41.744 "cluster_size": 4194304 00:30:41.744 } 00:30:41.744 ]' 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="10da769c-75b4-4479-aff1-65f954f4dcc4") .free_clusters' 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1373 -- # fc=5114 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="10da769c-75b4-4479-aff1-65f954f4dcc4") .cluster_size' 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1374 -- # cs=4194304 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1377 -- # free_mb=20456 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1378 -- # echo 20456 00:30:41.744 20456 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@85 -- # '[' 20456 -gt 20480 ']' 00:30:41.744 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 10da769c-75b4-4479-aff1-65f954f4dcc4 lbd_nest_0 20456 00:30:42.003 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@88 -- # lb_nested_guid=965a526b-008c-4a7f-9e6f-abc28fa20807 00:30:42.003 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:42.261 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@90 -- # for bdev in $lb_nested_guid 00:30:42.261 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 965a526b-008c-4a7f-9e6f-abc28fa20807 00:30:42.520 05:30:55 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:42.520 05:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@95 -- # qd_depth=("1" "32" "128") 00:30:42.520 05:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@96 -- # io_size=("512" "131072") 00:30:42.520 05:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:30:42.520 05:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:42.520 05:30:56 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:30:54.727 Initializing NVMe Controllers 00:30:54.727 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:30:54.727 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:30:54.727 Initialization complete. Launching workers. 
00:30:54.727 ======================================================== 00:30:54.727 Latency(us) 00:30:54.727 Device Information : IOPS MiB/s Average min max 00:30:54.727 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 49.20 0.02 20397.63 120.01 45586.16 00:30:54.727 ======================================================== 00:30:54.727 Total : 49.20 0.02 20397.63 120.01 45586.16 00:30:54.727 00:30:54.727 05:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:30:54.727 05:31:06 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:04.707 Initializing NVMe Controllers 00:31:04.707 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:04.707 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:04.707 Initialization complete. Launching workers. 
00:31:04.707 ======================================================== 00:31:04.707 Latency(us) 00:31:04.707 Device Information : IOPS MiB/s Average min max 00:31:04.707 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 71.10 8.89 14070.39 5039.14 51876.93 00:31:04.707 ======================================================== 00:31:04.707 Total : 71.10 8.89 14070.39 5039.14 51876.93 00:31:04.707 00:31:04.707 05:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:04.707 05:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:04.707 05:31:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:14.689 Initializing NVMe Controllers 00:31:14.689 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:14.689 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:14.689 Initialization complete. Launching workers. 
00:31:14.689 ======================================================== 00:31:14.689 Latency(us) 00:31:14.689 Device Information : IOPS MiB/s Average min max 00:31:14.689 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8586.68 4.19 3726.10 226.85 10162.42 00:31:14.689 ======================================================== 00:31:14.689 Total : 8586.68 4.19 3726.10 226.85 10162.42 00:31:14.689 00:31:14.689 05:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:14.689 05:31:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:24.672 Initializing NVMe Controllers 00:31:24.672 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:24.672 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:24.672 Initialization complete. Launching workers. 
00:31:24.672 ======================================================== 00:31:24.672 Latency(us) 00:31:24.672 Device Information : IOPS MiB/s Average min max 00:31:24.672 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 4431.39 553.92 7221.59 561.14 16349.50 00:31:24.672 ======================================================== 00:31:24.672 Total : 4431.39 553.92 7221.59 561.14 16349.50 00:31:24.672 00:31:24.672 05:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@97 -- # for qd in "${qd_depth[@]}" 00:31:24.672 05:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:24.672 05:31:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 512 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:34.652 Initializing NVMe Controllers 00:31:34.652 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:34.652 Controller IO queue size 128, less than required. 00:31:34.652 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:34.652 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:34.652 Initialization complete. Launching workers. 
00:31:34.652 ======================================================== 00:31:34.652 Latency(us) 00:31:34.652 Device Information : IOPS MiB/s Average min max 00:31:34.652 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15863.23 7.75 8073.70 1326.82 22629.88 00:31:34.652 ======================================================== 00:31:34.652 Total : 15863.23 7.75 8073.70 1326.82 22629.88 00:31:34.652 00:31:34.652 05:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@98 -- # for o in "${io_size[@]}" 00:31:34.652 05:31:47 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 131072 -w randrw -M 50 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:31:46.858 Initializing NVMe Controllers 00:31:46.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:31:46.858 Controller IO queue size 128, less than required. 00:31:46.858 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:31:46.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:31:46.858 Initialization complete. Launching workers. 
00:31:46.858 ======================================================== 00:31:46.858 Latency(us) 00:31:46.858 Device Information : IOPS MiB/s Average min max 00:31:46.858 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1214.26 151.78 106150.76 16205.04 215342.96 00:31:46.858 ======================================================== 00:31:46.858 Total : 1214.26 151.78 106150.76 16205.04 215342.96 00:31:46.858 00:31:46.858 05:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:46.858 05:31:58 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@105 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 965a526b-008c-4a7f-9e6f-abc28fa20807 00:31:46.858 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:31:46.858 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1a1bf300-22fc-4ff2-8174-07cd1155a20d 00:31:46.858 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:31:46.858 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@516 -- # nvmfcleanup 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # sync 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set +e 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # for i 
in {1..20} 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:31:46.859 rmmod nvme_tcp 00:31:46.859 rmmod nvme_fabrics 00:31:46.859 rmmod nvme_keyring 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@128 -- # set -e 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # return 0 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@517 -- # '[' -n 453729 ']' 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@518 -- # killprocess 453729 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 453729 ']' 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 453729 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:46.859 05:31:59 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 453729 00:31:46.859 05:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:46.859 05:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:46.859 05:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 453729' 00:31:46.859 killing process with pid 453729 00:31:46.859 05:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@973 -- # kill 453729 00:31:46.859 05:32:00 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 453729 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@523 -- # [[ tcp == 
\t\c\p ]] 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # iptr 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-save 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@791 -- # iptables-restore 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:48.235 05:32:01 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:31:50.136 00:31:50.136 real 1m33.504s 00:31:50.136 user 5m33.110s 00:31:50.136 sys 0m17.340s 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:31:50.136 ************************************ 00:31:50.136 END TEST nvmf_perf 00:31:50.136 ************************************ 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@10 -- # set +x 00:31:50.136 ************************************ 00:31:50.136 START TEST nvmf_fio_host 00:31:50.136 ************************************ 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:31:50.136 * Looking for test storage... 00:31:50.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lcov --version 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:50.136 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- 
# export 'LCOV_OPTS= 00:31:50.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.396 --rc genhtml_branch_coverage=1 00:31:50.396 --rc genhtml_function_coverage=1 00:31:50.396 --rc genhtml_legend=1 00:31:50.396 --rc geninfo_all_blocks=1 00:31:50.396 --rc geninfo_unexecuted_blocks=1 00:31:50.396 00:31:50.396 ' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:31:50.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.396 --rc genhtml_branch_coverage=1 00:31:50.396 --rc genhtml_function_coverage=1 00:31:50.396 --rc genhtml_legend=1 00:31:50.396 --rc geninfo_all_blocks=1 00:31:50.396 --rc geninfo_unexecuted_blocks=1 00:31:50.396 00:31:50.396 ' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:31:50.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.396 --rc genhtml_branch_coverage=1 00:31:50.396 --rc genhtml_function_coverage=1 00:31:50.396 --rc genhtml_legend=1 00:31:50.396 --rc geninfo_all_blocks=1 00:31:50.396 --rc geninfo_unexecuted_blocks=1 00:31:50.396 00:31:50.396 ' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:31:50.396 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:50.396 --rc genhtml_branch_coverage=1 00:31:50.396 --rc genhtml_function_coverage=1 00:31:50.396 --rc genhtml_legend=1 00:31:50.396 --rc geninfo_all_blocks=1 00:31:50.396 --rc geninfo_unexecuted_blocks=1 00:31:50.396 00:31:50.396 ' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.396 05:32:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # : 0 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:31:50.396 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:31:50.396 05:32:03 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # xtrace_disable 00:31:50.396 05:32:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # pci_devs=() 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # net_devs=() 00:31:56.961 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # e810=() 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # local -ga e810 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # x722=() 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # local -ga x722 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # mlx=() 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@322 -- # local -ga mlx 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@328 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.0 (0x8086 - 0x159b)' 00:31:56.962 Found 0000:af:00.0 (0x8086 - 0x159b) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:31:56.962 Found 0000:af:00.1 (0x8086 - 0x159b) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.962 05:32:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:31:56.962 Found net devices under 0000:af:00.0: cvl_0_0 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:31:56.962 Found net devices under 0000:af:00.1: cvl_0_1 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 
00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@442 -- # is_hw=yes 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:31:56.962 05:32:09 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:31:56.962 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:56.962 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:31:56.962 00:31:56.962 --- 10.0.0.2 ping statistics --- 00:31:56.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.962 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:31:56.962 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:56.962 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:31:56.962 00:31:56.962 --- 10.0.0.1 ping statistics --- 00:31:56.962 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:56.962 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # return 0 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=470610 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # 
trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 470610 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 470610 ']' 00:31:56.962 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:56.963 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:56.963 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:56.963 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:56.963 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:56.963 05:32:09 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.963 [2024-12-15 05:32:09.829382] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:31:56.963 [2024-12-15 05:32:09.829423] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:56.963 [2024-12-15 05:32:09.904410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:56.963 [2024-12-15 05:32:09.927233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:56.963 [2024-12-15 05:32:09.927269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:31:56.963 [2024-12-15 05:32:09.927276] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:56.963 [2024-12-15 05:32:09.927282] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:56.963 [2024-12-15 05:32:09.927287] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:56.963 [2024-12-15 05:32:09.928691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:31:56.963 [2024-12-15 05:32:09.928804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:31:56.963 [2024-12-15 05:32:09.928912] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:56.963 [2024-12-15 05:32:09.928913] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:56.963 [2024-12-15 05:32:10.189816] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:31:56.963 Malloc1 00:31:56.963 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:57.221 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:57.221 05:32:10 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:57.479 [2024-12-15 05:32:11.074844] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:57.479 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:31:57.738 05:32:11 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:31:57.738 05:32:11 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:31:57.997 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:31:57.997 fio-3.35 00:31:57.997 Starting 1 thread 00:32:00.545 00:32:00.545 test: (groupid=0, jobs=1): err= 0: pid=470984: Sun Dec 15 05:32:13 2024 00:32:00.545 read: IOPS=11.9k, BW=46.7MiB/s (48.9MB/s)(93.5MiB/2005msec) 00:32:00.545 slat (nsec): min=1519, max=250597, avg=1728.08, stdev=2277.91 00:32:00.545 clat (usec): min=3185, max=9907, avg=5916.77, stdev=450.32 00:32:00.545 lat (usec): min=3224, max=9909, avg=5918.50, stdev=450.28 00:32:00.545 clat percentiles (usec): 00:32:00.545 | 1.00th=[ 4817], 5.00th=[ 5211], 10.00th=[ 5342], 20.00th=[ 5538], 00:32:00.545 | 30.00th=[ 5735], 40.00th=[ 5800], 50.00th=[ 5932], 60.00th=[ 6063], 00:32:00.545 | 70.00th=[ 6128], 80.00th=[ 6259], 90.00th=[ 6456], 95.00th=[ 6587], 00:32:00.545 | 99.00th=[ 6915], 99.50th=[ 6980], 99.90th=[ 7898], 99.95th=[ 9241], 00:32:00.545 | 99.99th=[ 9896] 00:32:00.545 bw ( KiB/s): min=46752, max=48544, per=99.94%, avg=47748.00, stdev=745.49, samples=4 00:32:00.545 iops : min=11688, max=12136, avg=11937.00, stdev=186.37, samples=4 00:32:00.545 write: IOPS=11.9k, BW=46.4MiB/s (48.7MB/s)(93.1MiB/2005msec); 0 zone resets 00:32:00.545 slat (nsec): min=1560, max=231594, avg=1806.74, stdev=1676.26 00:32:00.545 clat (usec): min=2478, max=9093, avg=4785.41, stdev=376.70 00:32:00.545 lat (usec): min=2493, max=9095, avg=4787.21, stdev=376.76 00:32:00.545 clat percentiles (usec): 00:32:00.545 | 1.00th=[ 3916], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4490], 00:32:00.545 | 30.00th=[ 4621], 40.00th=[ 4686], 50.00th=[ 4752], 60.00th=[ 4883], 
00:32:00.545 | 70.00th=[ 4948], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:32:00.545 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7111], 99.95th=[ 8455], 00:32:00.545 | 99.99th=[ 9110] 00:32:00.545 bw ( KiB/s): min=47296, max=48000, per=100.00%, avg=47568.00, stdev=301.89, samples=4 00:32:00.545 iops : min=11824, max=12000, avg=11892.00, stdev=75.47, samples=4 00:32:00.545 lat (msec) : 4=0.83%, 10=99.17% 00:32:00.545 cpu : usr=74.65%, sys=24.20%, ctx=92, majf=0, minf=3 00:32:00.545 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:32:00.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:00.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:00.545 issued rwts: total=23948,23837,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:00.545 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:00.545 00:32:00.545 Run status group 0 (all jobs): 00:32:00.545 READ: bw=46.7MiB/s (48.9MB/s), 46.7MiB/s-46.7MiB/s (48.9MB/s-48.9MB/s), io=93.5MiB (98.1MB), run=2005-2005msec 00:32:00.545 WRITE: bw=46.4MiB/s (48.7MB/s), 46.4MiB/s-46.4MiB/s (48.7MB/s-48.7MB/s), io=93.1MiB (97.6MB), run=2005-2005msec 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:00.545 05:32:13 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' 
]] 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:00.545 05:32:14 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:32:00.809 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:32:00.809 fio-3.35 00:32:00.809 Starting 1 thread 00:32:03.340 00:32:03.340 test: (groupid=0, jobs=1): err= 0: pid=471543: Sun Dec 15 05:32:16 2024 00:32:03.340 read: IOPS=10.8k, BW=168MiB/s (177MB/s)(338MiB/2006msec) 00:32:03.340 slat (nsec): min=2470, max=86381, avg=2835.26, stdev=1256.91 00:32:03.340 clat (usec): min=1003, max=50181, avg=6961.50, stdev=3365.35 00:32:03.340 lat (usec): min=1007, max=50183, avg=6964.34, stdev=3365.38 00:32:03.340 clat percentiles (usec): 00:32:03.340 | 1.00th=[ 3752], 5.00th=[ 4293], 10.00th=[ 4752], 20.00th=[ 5342], 00:32:03.340 | 30.00th=[ 5800], 40.00th=[ 6325], 50.00th=[ 6849], 60.00th=[ 7242], 00:32:03.340 | 70.00th=[ 7570], 80.00th=[ 7963], 90.00th=[ 8586], 95.00th=[ 9241], 00:32:03.340 | 99.00th=[10945], 99.50th=[43779], 99.90th=[48497], 99.95th=[49021], 00:32:03.340 | 99.99th=[50070] 00:32:03.340 bw ( KiB/s): min=75424, max=97280, per=49.99%, avg=86240.00, stdev=9024.49, samples=4 00:32:03.340 iops : min= 4714, max= 6080, avg=5390.00, stdev=564.03, samples=4 00:32:03.340 write: IOPS=6446, BW=101MiB/s (106MB/s)(177MiB/1753msec); 0 zone resets 00:32:03.340 slat (usec): min=29, max=324, avg=31.92, stdev= 6.51 00:32:03.340 clat (usec): min=5150, max=15667, avg=8556.28, stdev=1505.25 00:32:03.340 lat (usec): min=5184, max=15696, avg=8588.21, stdev=1506.03 00:32:03.340 clat percentiles (usec): 00:32:03.340 | 1.00th=[ 5735], 5.00th=[ 6456], 10.00th=[ 6783], 
20.00th=[ 7242], 00:32:03.340 | 30.00th=[ 7635], 40.00th=[ 8029], 50.00th=[ 8356], 60.00th=[ 8848], 00:32:03.340 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10683], 95.00th=[11338], 00:32:03.340 | 99.00th=[12256], 99.50th=[12911], 99.90th=[14091], 99.95th=[14222], 00:32:03.340 | 99.99th=[14353] 00:32:03.340 bw ( KiB/s): min=78080, max=101376, per=87.15%, avg=89880.00, stdev=9630.08, samples=4 00:32:03.340 iops : min= 4880, max= 6336, avg=5617.50, stdev=601.88, samples=4 00:32:03.340 lat (msec) : 2=0.01%, 4=1.60%, 10=90.74%, 20=7.26%, 50=0.38% 00:32:03.340 lat (msec) : 100=0.01% 00:32:03.340 cpu : usr=86.08%, sys=13.22%, ctx=43, majf=0, minf=3 00:32:03.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:32:03.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:03.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:03.340 issued rwts: total=21630,11300,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:03.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:03.340 00:32:03.340 Run status group 0 (all jobs): 00:32:03.340 READ: bw=168MiB/s (177MB/s), 168MiB/s-168MiB/s (177MB/s-177MB/s), io=338MiB (354MB), run=2006-2006msec 00:32:03.340 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=177MiB (185MB), run=1753-1753msec 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 1 -eq 1 ']' 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # bdfs=($(get_nvme_bdfs)) 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@51 -- # get_nvme_bdfs 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1498 -- # local bdfs 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:03.340 05:32:16 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:03.598 05:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:03.598 05:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:03.598 05:32:17 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 -i 10.0.0.2 00:32:06.882 Nvme0n1 00:32:06.882 05:32:20 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore -c 1073741824 Nvme0n1 lvs_0 00:32:09.411 05:32:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@53 -- # ls_guid=77fa9631-c61e-449a-bd66-357dd59cfc58 00:32:09.411 05:32:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@54 -- # get_lvs_free_mb 77fa9631-c61e-449a-bd66-357dd59cfc58 00:32:09.411 05:32:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=77fa9631-c61e-449a-bd66-357dd59cfc58 00:32:09.411 05:32:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:09.411 05:32:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:09.411 05:32:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:09.411 05:32:22 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:09.669 { 00:32:09.669 "uuid": "77fa9631-c61e-449a-bd66-357dd59cfc58", 00:32:09.669 "name": "lvs_0", 00:32:09.669 "base_bdev": "Nvme0n1", 00:32:09.669 "total_data_clusters": 930, 00:32:09.669 "free_clusters": 930, 00:32:09.669 "block_size": 512, 00:32:09.669 "cluster_size": 1073741824 00:32:09.669 } 00:32:09.669 ]' 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="77fa9631-c61e-449a-bd66-357dd59cfc58") .free_clusters' 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=930 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="77fa9631-c61e-449a-bd66-357dd59cfc58") .cluster_size' 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=1073741824 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=952320 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 952320 00:32:09.669 952320 00:32:09.669 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_0 lbd_0 952320 00:32:09.928 aebc0feb-7401-4ad7-b751-f3221fed0ad9 00:32:09.928 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000001 00:32:10.186 05:32:23 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 lvs_0/lbd_0 00:32:10.445 05:32:23 
nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:32:10.445 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@59 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:10.445 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:10.726 05:32:24 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:10.726 05:32:24 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:10.984 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:10.984 fio-3.35 00:32:10.984 Starting 1 thread 00:32:13.513 00:32:13.513 test: (groupid=0, jobs=1): err= 0: pid=473338: Sun Dec 15 05:32:26 2024 00:32:13.513 read: IOPS=8079, BW=31.6MiB/s (33.1MB/s)(63.3MiB/2006msec) 00:32:13.513 slat (nsec): min=1523, max=98981, avg=1658.25, stdev=1069.47 00:32:13.513 clat (usec): min=826, max=169876, avg=8689.74, stdev=10254.54 00:32:13.513 lat (usec): 
min=828, max=169896, avg=8691.40, stdev=10254.72 00:32:13.513 clat percentiles (msec): 00:32:13.513 | 1.00th=[ 7], 5.00th=[ 7], 10.00th=[ 8], 20.00th=[ 8], 00:32:13.513 | 30.00th=[ 8], 40.00th=[ 8], 50.00th=[ 9], 60.00th=[ 9], 00:32:13.513 | 70.00th=[ 9], 80.00th=[ 9], 90.00th=[ 9], 95.00th=[ 10], 00:32:13.513 | 99.00th=[ 10], 99.50th=[ 13], 99.90th=[ 169], 99.95th=[ 169], 00:32:13.513 | 99.99th=[ 169] 00:32:13.513 bw ( KiB/s): min=22824, max=35528, per=99.93%, avg=32296.00, stdev=6315.55, samples=4 00:32:13.513 iops : min= 5706, max= 8882, avg=8074.00, stdev=1578.89, samples=4 00:32:13.513 write: IOPS=8072, BW=31.5MiB/s (33.1MB/s)(63.3MiB/2006msec); 0 zone resets 00:32:13.513 slat (nsec): min=1566, max=88143, avg=1720.09, stdev=796.66 00:32:13.513 clat (usec): min=212, max=168454, avg=7030.21, stdev=9579.97 00:32:13.513 lat (usec): min=213, max=168459, avg=7031.93, stdev=9580.17 00:32:13.513 clat percentiles (msec): 00:32:13.513 | 1.00th=[ 6], 5.00th=[ 6], 10.00th=[ 6], 20.00th=[ 6], 00:32:13.513 | 30.00th=[ 7], 40.00th=[ 7], 50.00th=[ 7], 60.00th=[ 7], 00:32:13.513 | 70.00th=[ 7], 80.00th=[ 7], 90.00th=[ 8], 95.00th=[ 8], 00:32:13.513 | 99.00th=[ 8], 99.50th=[ 11], 99.90th=[ 169], 99.95th=[ 169], 00:32:13.513 | 99.99th=[ 169] 00:32:13.513 bw ( KiB/s): min=23720, max=35200, per=99.88%, avg=32250.00, stdev=5687.23, samples=4 00:32:13.513 iops : min= 5930, max= 8800, avg=8062.50, stdev=1421.81, samples=4 00:32:13.513 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:32:13.513 lat (msec) : 2=0.04%, 4=0.24%, 10=99.11%, 20=0.19%, 250=0.40% 00:32:13.513 cpu : usr=71.92%, sys=27.23%, ctx=192, majf=0, minf=3 00:32:13.513 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:13.513 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:13.513 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:13.513 issued rwts: total=16207,16193,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:13.513 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:32:13.513 00:32:13.513 Run status group 0 (all jobs): 00:32:13.513 READ: bw=31.6MiB/s (33.1MB/s), 31.6MiB/s-31.6MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.4MB), run=2006-2006msec 00:32:13.513 WRITE: bw=31.5MiB/s (33.1MB/s), 31.5MiB/s-31.5MiB/s (33.1MB/s-33.1MB/s), io=63.3MiB (66.3MB), run=2006-2006msec 00:32:13.513 05:32:26 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:32:13.513 05:32:27 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --clear-method none lvs_0/lbd_0 lvs_n_0 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@64 -- # ls_nested_guid=31e9e8fc-9bf1-4d39-a733-3148efd95a2d 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@65 -- # get_lvs_free_mb 31e9e8fc-9bf1-4d39-a733-3148efd95a2d 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1368 -- # local lvs_uuid=31e9e8fc-9bf1-4d39-a733-3148efd95a2d 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1369 -- # local lvs_info 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1370 -- # local fc 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1371 -- # local cs 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1372 -- # lvs_info='[ 00:32:14.881 { 00:32:14.881 "uuid": "77fa9631-c61e-449a-bd66-357dd59cfc58", 00:32:14.881 "name": "lvs_0", 00:32:14.881 "base_bdev": "Nvme0n1", 00:32:14.881 "total_data_clusters": 930, 00:32:14.881 "free_clusters": 0, 00:32:14.881 
"block_size": 512, 00:32:14.881 "cluster_size": 1073741824 00:32:14.881 }, 00:32:14.881 { 00:32:14.881 "uuid": "31e9e8fc-9bf1-4d39-a733-3148efd95a2d", 00:32:14.881 "name": "lvs_n_0", 00:32:14.881 "base_bdev": "aebc0feb-7401-4ad7-b751-f3221fed0ad9", 00:32:14.881 "total_data_clusters": 237847, 00:32:14.881 "free_clusters": 237847, 00:32:14.881 "block_size": 512, 00:32:14.881 "cluster_size": 4194304 00:32:14.881 } 00:32:14.881 ]' 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # jq '.[] | select(.uuid=="31e9e8fc-9bf1-4d39-a733-3148efd95a2d") .free_clusters' 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1373 -- # fc=237847 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # jq '.[] | select(.uuid=="31e9e8fc-9bf1-4d39-a733-3148efd95a2d") .cluster_size' 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1374 -- # cs=4194304 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1377 -- # free_mb=951388 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1378 -- # echo 951388 00:32:14.881 951388 00:32:14.881 05:32:28 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -l lvs_n_0 lbd_nest_0 951388 00:32:15.444 bf9ad6e7-6c64-4a56-9149-8ab14ba2cfda 00:32:15.444 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000001 00:32:15.701 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 lvs_n_0/lbd_nest_0 00:32:15.957 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@69 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@70 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print 
$3}' 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:32:16.232 05:32:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:32:16.498 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:32:16.498 fio-3.35 00:32:16.498 Starting 1 thread 00:32:19.026 00:32:19.026 test: (groupid=0, jobs=1): err= 0: pid=474291: Sun Dec 15 05:32:32 2024 00:32:19.026 read: IOPS=7894, BW=30.8MiB/s (32.3MB/s)(61.9MiB/2006msec) 00:32:19.026 slat (nsec): min=1521, max=102629, avg=1662.72, stdev=1136.44 00:32:19.026 clat (usec): min=3090, max=14992, avg=8947.73, stdev=792.14 00:32:19.026 lat (usec): min=3094, max=14993, avg=8949.39, stdev=792.09 00:32:19.026 clat percentiles (usec): 
00:32:19.026 | 1.00th=[ 7046], 5.00th=[ 7701], 10.00th=[ 7963], 20.00th=[ 8291], 00:32:19.026 | 30.00th=[ 8586], 40.00th=[ 8717], 50.00th=[ 8979], 60.00th=[ 9110], 00:32:19.026 | 70.00th=[ 9372], 80.00th=[ 9634], 90.00th=[ 9896], 95.00th=[10159], 00:32:19.026 | 99.00th=[10683], 99.50th=[10814], 99.90th=[13435], 99.95th=[14353], 00:32:19.026 | 99.99th=[15008] 00:32:19.026 bw ( KiB/s): min=30616, max=31936, per=99.84%, avg=31528.00, stdev=619.71, samples=4 00:32:19.026 iops : min= 7654, max= 7984, avg=7882.00, stdev=154.93, samples=4 00:32:19.026 write: IOPS=7868, BW=30.7MiB/s (32.2MB/s)(61.7MiB/2006msec); 0 zone resets 00:32:19.026 slat (nsec): min=1566, max=80785, avg=1736.58, stdev=760.76 00:32:19.026 clat (usec): min=1605, max=13346, avg=7231.60, stdev=652.46 00:32:19.026 lat (usec): min=1610, max=13347, avg=7233.33, stdev=652.44 00:32:19.026 clat percentiles (usec): 00:32:19.026 | 1.00th=[ 5669], 5.00th=[ 6259], 10.00th=[ 6456], 20.00th=[ 6718], 00:32:19.026 | 30.00th=[ 6915], 40.00th=[ 7111], 50.00th=[ 7242], 60.00th=[ 7373], 00:32:19.026 | 70.00th=[ 7570], 80.00th=[ 7767], 90.00th=[ 7963], 95.00th=[ 8225], 00:32:19.026 | 99.00th=[ 8586], 99.50th=[ 8717], 99.90th=[11469], 99.95th=[13173], 00:32:19.026 | 99.99th=[13304] 00:32:19.026 bw ( KiB/s): min=31360, max=31568, per=99.96%, avg=31460.00, stdev=88.96, samples=4 00:32:19.026 iops : min= 7840, max= 7892, avg=7865.00, stdev=22.24, samples=4 00:32:19.026 lat (msec) : 2=0.01%, 4=0.08%, 10=95.93%, 20=3.98% 00:32:19.026 cpu : usr=70.17%, sys=28.53%, ctx=198, majf=0, minf=3 00:32:19.026 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:32:19.026 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:32:19.026 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:32:19.026 issued rwts: total=15837,15784,0,0 short=0,0,0,0 dropped=0,0,0,0 00:32:19.026 latency : target=0, window=0, percentile=100.00%, depth=128 00:32:19.026 00:32:19.026 Run status 
group 0 (all jobs): 00:32:19.026 READ: bw=30.8MiB/s (32.3MB/s), 30.8MiB/s-30.8MiB/s (32.3MB/s-32.3MB/s), io=61.9MiB (64.9MB), run=2006-2006msec 00:32:19.026 WRITE: bw=30.7MiB/s (32.2MB/s), 30.7MiB/s-30.7MiB/s (32.2MB/s-32.2MB/s), io=61.7MiB (64.7MB), run=2006-2006msec 00:32:19.026 05:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:32:19.026 05:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@74 -- # sync 00:32:19.026 05:32:32 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -t 120 bdev_lvol_delete lvs_n_0/lbd_nest_0 00:32:23.205 05:32:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_n_0 00:32:23.205 05:32:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete lvs_0/lbd_0 00:32:26.482 05:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -l lvs_0 00:32:26.482 05:32:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # sync 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set +e 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:32:27.853 rmmod nvme_tcp 00:32:27.853 rmmod nvme_fabrics 00:32:27.853 rmmod nvme_keyring 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@128 -- # set -e 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # return 0 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@517 -- # '[' -n 470610 ']' 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@518 -- # killprocess 470610 00:32:27.853 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 470610 ']' 00:32:27.854 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 470610 00:32:27.854 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 470610 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 470610' 00:32:28.112 killing process with pid 470610 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 470610 
00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 470610 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # iptr 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-save 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@791 -- # iptables-restore 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:28.112 05:32:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:32:30.647 00:32:30.647 real 0m40.193s 00:32:30.647 user 2m41.363s 00:32:30.647 sys 0m8.915s 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.647 ************************************ 00:32:30.647 END TEST nvmf_fio_host 00:32:30.647 ************************************ 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # 
run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:32:30.647 ************************************ 00:32:30.647 START TEST nvmf_failover 00:32:30.647 ************************************ 00:32:30.647 05:32:43 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:32:30.647 * Looking for test storage... 00:32:30.647 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lcov --version 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 
00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 
00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:32:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.647 --rc genhtml_branch_coverage=1 00:32:30.647 --rc genhtml_function_coverage=1 00:32:30.647 --rc genhtml_legend=1 00:32:30.647 --rc geninfo_all_blocks=1 00:32:30.647 --rc geninfo_unexecuted_blocks=1 00:32:30.647 00:32:30.647 ' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:32:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.647 --rc genhtml_branch_coverage=1 00:32:30.647 --rc genhtml_function_coverage=1 00:32:30.647 --rc genhtml_legend=1 00:32:30.647 --rc geninfo_all_blocks=1 00:32:30.647 --rc geninfo_unexecuted_blocks=1 00:32:30.647 00:32:30.647 ' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:32:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.647 --rc genhtml_branch_coverage=1 00:32:30.647 --rc genhtml_function_coverage=1 00:32:30.647 --rc genhtml_legend=1 00:32:30.647 --rc geninfo_all_blocks=1 00:32:30.647 --rc geninfo_unexecuted_blocks=1 00:32:30.647 00:32:30.647 ' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:32:30.647 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:30.647 --rc genhtml_branch_coverage=1 00:32:30.647 --rc genhtml_function_coverage=1 00:32:30.647 --rc genhtml_legend=1 00:32:30.647 --rc geninfo_all_blocks=1 00:32:30.647 --rc geninfo_unexecuted_blocks=1 00:32:30.647 00:32:30.647 ' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.647 05:32:44 
nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # : 0 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:30.647 05:32:44 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:32:30.647 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@55 -- # have_pci_nics=0 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@476 -- # prepare_net_devs 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@438 -- # local -g is_hw=no 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@440 -- # remove_spdk_ns 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:32:30.647 05:32:44 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # xtrace_disable 00:32:30.647 05:32:44 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:37.218 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # pci_devs=() 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # local -a pci_devs 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # pci_net_devs=() 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # pci_drivers=() 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # local -A pci_drivers 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # net_devs=() 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@319 -- # local -ga net_devs 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # e810=() 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # local -ga e810 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # x722=() 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # local -ga x722 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # mlx=() 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@322 -- # local -ga mlx 00:32:37.219 05:32:49 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 
00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:32:37.219 Found 0000:af:00.0 (0x8086 - 0x159b) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:32:37.219 Found 0000:af:00.1 (0x8086 - 0x159b) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@398 -- # 
[[ tcp == rdma ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:32:37.219 Found net devices under 0000:af:00.0: cvl_0_0 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # [[ up == up ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: 
cvl_0_1' 00:32:37.219 Found net devices under 0000:af:00.1: cvl_0_1 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@442 -- # is_hw=yes 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # 
ip -4 addr flush cvl_0_0 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:32:37.219 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:37.219 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:32:37.219 00:32:37.219 --- 10.0.0.2 ping statistics --- 00:32:37.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.219 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:32:37.219 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:37.219 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.206 ms 00:32:37.219 00:32:37.219 --- 10.0.0.1 ping statistics --- 00:32:37.219 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:37.219 rtt min/avg/max/mdev = 0.206/0.206/0.206/0.000 ms 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # return 0 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:32:37.219 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@509 -- # nvmfpid=479506 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@510 -- # waitforlisten 479506 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 479506 ']' 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:37.220 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.220 05:32:49 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:37.220 [2024-12-15 05:32:50.047777] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:37.220 [2024-12-15 05:32:50.047832] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:37.220 [2024-12-15 05:32:50.125189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:37.220 [2024-12-15 05:32:50.146732] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:37.220 [2024-12-15 05:32:50.146771] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:37.220 [2024-12-15 05:32:50.146779] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:37.220 [2024-12-15 05:32:50.146785] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:32:37.220 [2024-12-15 05:32:50.146793] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:37.220 [2024-12-15 05:32:50.147974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:32:37.220 [2024-12-15 05:32:50.148072] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:32:37.220 [2024-12-15 05:32:50.148074] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:32:37.220 [2024-12-15 05:32:50.468160] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:32:37.220 Malloc0 00:32:37.220 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:37.477 05:32:50 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:37.477 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:37.735 [2024-12-15 05:32:51.261961] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:37.735 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:37.993 [2024-12-15 05:32:51.446433] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:37.993 [2024-12-15 05:32:51.639086] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=479789 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 479789 /var/tmp/bdevperf.sock 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- 
# '[' -z 479789 ']' 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:37.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:37.993 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:38.250 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:38.250 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:38.250 05:32:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:38.508 NVMe0n1 00:32:38.508 05:32:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:39.073 00:32:39.073 05:32:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:39.073 05:32:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=479974 00:32:39.073 05:32:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 
00:32:40.005 05:32:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:40.005 [2024-12-15 05:32:53.671017] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cdaa0 is same with the state(6) to be set 00:32:40.005 [message repeated for tqpair=0x22cdaa0 through 05:32:53.671346] 00:32:40.262 05:32:53 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:32:43.554 05:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:43.554 00:32:43.554 05:32:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:43.554 [2024-12-15 05:32:57.180512] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cefe0 is same with the state(6) to be set 00:32:43.556 [2024-12-15 05:32:57.181183]
tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cefe0 is same with the state(6) to be set 00:32:43.556 05:32:57 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:32:46.843 05:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:46.843 [2024-12-15 05:33:00.409603] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:46.843 05:33:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:32:47.778 05:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:48.037 [2024-12-15 05:33:01.651397] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22cfea0 is same with the state(6) to be set
00:32:48.037 05:33:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 479974 00:32:54.610 { 00:32:54.610 "results": [ 00:32:54.610 { 00:32:54.610 "job": "NVMe0n1", 00:32:54.610 "core_mask": "0x1", 00:32:54.610 "workload": "verify", 00:32:54.610 "status": "finished", 00:32:54.610 "verify_range": { 00:32:54.610 "start": 0, 00:32:54.610 "length": 16384 00:32:54.610 }, 00:32:54.610 "queue_depth": 128, 00:32:54.610 "io_size": 4096, 00:32:54.610 "runtime": 15.001903, 00:32:54.610 "iops": 11223.50944410186, 00:32:54.610 "mibps": 43.84183376602289, 00:32:54.610 "io_failed": 8133, 00:32:54.610 "io_timeout": 0, 00:32:54.610 "avg_latency_us": 10857.31526611517, 00:32:54.610 "min_latency_us": 415.45142857142855, 00:32:54.610 "max_latency_us": 16227.961904761905 00:32:54.610 } 00:32:54.610 ], 00:32:54.610 "core_count": 1 00:32:54.610 } 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 479789 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479789 ']' 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479789 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479789 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479789' 00:32:54.610 killing process with pid 479789 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479789 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479789 00:32:54.610 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:32:54.610 [2024-12-15 05:32:51.715942] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:32:54.610 [2024-12-15 05:32:51.716005] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid479789 ] 00:32:54.610 [2024-12-15 05:32:51.792610] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:54.610 [2024-12-15 05:32:51.815183] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.610 Running I/O for 15 seconds... 
00:32:54.610 11379.00 IOPS, 44.45 MiB/s [2024-12-15T04:33:08.297Z] [2024-12-15 05:32:53.671521] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.610 [2024-12-15 05:32:53.671553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.610 [2024-12-15 05:32:53.671563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.610 [2024-12-15 05:32:53.671570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.610 [2024-12-15 05:32:53.671578] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.610 [2024-12-15 05:32:53.671585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.610 [2024-12-15 05:32:53.671592] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.610 [2024-12-15 05:32:53.671598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.610 [2024-12-15 05:32:53.671605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd343a0 is same with the state(6) to be set 00:32:54.610 [2024-12-15 05:32:53.671643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:101400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.610 [2024-12-15 05:32:53.671653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.610 [2024-12-15 
05:32:53.671666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:101408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.610 [2024-12-15 05:32:53.671673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:101584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:101600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:101608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:101616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:101624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:101632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:54.611 [2024-12-15 05:32:53.672098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:101640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:101648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:101656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:101664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:101672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:101680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:101688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:101704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:101712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:101720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:101728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:101736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:101744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:101752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:101760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.611 [2024-12-15 05:32:53.672331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.611 [2024-12-15 05:32:53.672339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:101248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:54.611 [2024-12-15 05:32:53.672346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:101256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.612 [2024-12-15 05:32:53.672365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:101264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.612 [2024-12-15 05:32:53.672380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:101768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:101776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:101784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672431] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:101800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:101808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:101816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:101824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:101832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:101840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:101848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:101856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:101864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:101872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:101880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 
[2024-12-15 05:32:53.672599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:101888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:101896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:101904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:101912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:101920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672680] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:101928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:101936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:101944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:101952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:101960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:101968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:101976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:101984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:101992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:102000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:102008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.612 [2024-12-15 05:32:53.672835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:101272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:54.612 [2024-12-15 05:32:53.672849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:101280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.612 [2024-12-15 05:32:53.672864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:101288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.612 [2024-12-15 05:32:53.672880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.612 [2024-12-15 05:32:53.672887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:101296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.672902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:101304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.672916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:101312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.672930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:101320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.672944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:101328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.672958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:101336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.672973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:101344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.672987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:101352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.672999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.673008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:101360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.673015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.673024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:101368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.673030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.673038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:101376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.673044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.673052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:101384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.673060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.673068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:101392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.613 [2024-12-15 05:32:53.673074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.673083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:102016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.613 [2024-12-15 05:32:53.673089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [2024-12-15 05:32:53.673097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:102024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:32:54.613 [2024-12-15 05:32:53.673103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.613 [... repeated nvme_io_qpair_print_command WRITE (sqid:1, nsid:1, lba:102032 through lba:102256, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) notice pairs elided ...] 00:32:54.614 [2024-12-15 05:32:53.673543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:54.614 [2024-12-15 05:32:53.673549] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.614 [2024-12-15 05:32:53.673556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:102264 len:8 PRP1 0x0 PRP2 0x0 00:32:54.614 [2024-12-15 05:32:53.673562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.614 [2024-12-15 05:32:53.673607] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:32:54.614 [2024-12-15 05:32:53.673616] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:32:54.614 [2024-12-15 05:32:53.676383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:54.614 [2024-12-15 05:32:53.676412] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd343a0 (9): Bad file descriptor 00:32:54.614 [2024-12-15 05:32:53.745249] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:32:54.614 11021.00 IOPS, 43.05 MiB/s [2024-12-15T04:33:08.301Z] 11064.33 IOPS, 43.22 MiB/s [2024-12-15T04:33:08.301Z] 11149.50 IOPS, 43.55 MiB/s [2024-12-15T04:33:08.301Z] [2024-12-15 05:32:57.182544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:45760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.614 [2024-12-15 05:32:57.182581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [... repeated nvme_io_qpair_print_command READ (sqid:1, nsid:1, lba:45768 through lba:46256, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (sqid:1, nsid:1, lba:46264 through lba:46376, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) / spdk_nvme_print_completion ABORTED - SQ DELETION (00/08) notice pairs elided ...] 00:32:54.616 [2024-12-15 05:32:57.183751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:46384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:46392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:46400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:46408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:46416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:46424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:46432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183846] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:46440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:46448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:46456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:46464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:46472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:46480 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:46488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:46496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:46504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.183984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:46512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.183990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.184001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:46520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.184009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 
05:32:57.184017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:46528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.184024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.184032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:46536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.184038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.184046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:46544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.184052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.616 [2024-12-15 05:32:57.184061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:46552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.616 [2024-12-15 05:32:57.184068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:46560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:46568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184096] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:46576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:46584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:46592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:46600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:46608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:46616 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:46624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:46632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:46640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:46648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:46656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184265] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:46664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:46672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:46680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:46688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:46696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:46704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:46712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:46720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:46728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:46736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:46744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:46752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 
[2024-12-15 05:32:57.184430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:46760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:46768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.617 [2024-12-15 05:32:57.184459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184477] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:54.617 [2024-12-15 05:32:57.184488] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.617 [2024-12-15 05:32:57.184495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:46776 len:8 PRP1 0x0 PRP2 0x0 00:32:54.617 [2024-12-15 05:32:57.184501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184544] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:32:54.617 [2024-12-15 05:32:57.184566] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.617 [2024-12-15 05:32:57.184573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 
05:32:57.184581] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.617 [2024-12-15 05:32:57.184588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184596] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.617 [2024-12-15 05:32:57.184602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.617 [2024-12-15 05:32:57.184616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.617 [2024-12-15 05:32:57.184624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state. 00:32:54.617 [2024-12-15 05:32:57.184646] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd343a0 (9): Bad file descriptor 00:32:54.617 [2024-12-15 05:32:57.187418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:32:54.617 [2024-12-15 05:32:57.249790] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 
11011.40 IOPS, 43.01 MiB/s [2024-12-15T04:33:08.304Z] 11090.83 IOPS, 43.32 MiB/s [2024-12-15T04:33:08.304Z] 11140.71 IOPS, 43.52 MiB/s [2024-12-15T04:33:08.304Z] 11167.50 IOPS, 43.62 MiB/s [2024-12-15T04:33:08.304Z] 11188.33 IOPS, 43.70 MiB/s
[2024-12-15 05:33:01.652153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:78984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-15 05:33:01.652187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical ABORTED - SQ DELETION command/completion pairs repeat for READ lba:78992 through lba:79176 and WRITE lba:79456 through lba:79608 ...]
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:79616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:79624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:79632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:79640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:79648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 
lba:79656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:79664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:79672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.652980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.652988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:79680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.653001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:79688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.653015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:79696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.619 [2024-12-15 05:33:01.653031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 
05:33:01.653039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:79184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:79192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:79208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:79216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:79224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653119] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:79232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:79240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:79248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:79256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:79264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:79272 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:79280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:79288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:79296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:79312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653289] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:79320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:79328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:79336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.619 [2024-12-15 05:33:01.653333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:79344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.619 [2024-12-15 05:33:01.653340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:79352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:79360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:79368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:79376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:79384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:79392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:79400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:79408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:32:54.620 [2024-12-15 05:33:01.653459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:79416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:79424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:79432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:79440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:79448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:32:54.620 [2024-12-15 05:33:01.653533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:79704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:79712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:79720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:79728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:79736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:79744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:79752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:79760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:79768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:79776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:79784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:79792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 
[2024-12-15 05:33:01.653711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:79800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:79808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:79816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:79824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:79832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653794] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:79840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:79848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:79856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:79864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:79872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:79880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:79888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:79896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.620 [2024-12-15 05:33:01.653913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:79904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.620 [2024-12-15 05:33:01.653920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.653928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:79912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.621 [2024-12-15 05:33:01.653935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.653943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:79920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.621 [2024-12-15 05:33:01.653950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.653959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:79928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.621 [2024-12-15 05:33:01.653967] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.653975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:79936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.621 [2024-12-15 05:33:01.653982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.653990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:79944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.621 [2024-12-15 05:33:01.654001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:79952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:32:54.621 [2024-12-15 05:33:01.654015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654039] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.621 [2024-12-15 05:33:01.654046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79960 len:8 PRP1 0x0 PRP2 0x0 00:32:54.621 [2024-12-15 05:33:01.654052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654062] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:54.621 [2024-12-15 05:33:01.654067] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.621 [2024-12-15 05:33:01.654073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:0 nsid:1 lba:79968 len:8 PRP1 0x0 PRP2 0x0 00:32:54.621 [2024-12-15 05:33:01.654079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654086] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:54.621 [2024-12-15 05:33:01.654091] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.621 [2024-12-15 05:33:01.654096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79976 len:8 PRP1 0x0 PRP2 0x0 00:32:54.621 [2024-12-15 05:33:01.654104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654111] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:54.621 [2024-12-15 05:33:01.654116] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.621 [2024-12-15 05:33:01.654122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79984 len:8 PRP1 0x0 PRP2 0x0 00:32:54.621 [2024-12-15 05:33:01.654128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654134] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:54.621 [2024-12-15 05:33:01.654139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.621 [2024-12-15 05:33:01.654145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:79992 len:8 PRP1 0x0 PRP2 0x0 00:32:54.621 [2024-12-15 05:33:01.654151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 
[2024-12-15 05:33:01.654158] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:32:54.621 [2024-12-15 05:33:01.654163] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:32:54.621 [2024-12-15 05:33:01.654168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:80000 len:8 PRP1 0x0 PRP2 0x0 00:32:54.621 [2024-12-15 05:33:01.654176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654217] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:32:54.621 [2024-12-15 05:33:01.654238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.621 [2024-12-15 05:33:01.654247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.621 [2024-12-15 05:33:01.654261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654268] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.621 [2024-12-15 05:33:01.654275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:32:54.621 [2024-12-15 
05:33:01.654288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:32:54.621 [2024-12-15 05:33:01.654295] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 00:32:54.621 [2024-12-15 05:33:01.657073] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller 00:32:54.621 [2024-12-15 05:33:01.657104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd343a0 (9): Bad file descriptor 00:32:54.621 [2024-12-15 05:33:01.683418] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful. 00:32:54.621 11151.70 IOPS, 43.56 MiB/s [2024-12-15T04:33:08.308Z] 11170.27 IOPS, 43.63 MiB/s [2024-12-15T04:33:08.308Z] 11184.42 IOPS, 43.69 MiB/s [2024-12-15T04:33:08.308Z] 11199.69 IOPS, 43.75 MiB/s [2024-12-15T04:33:08.308Z] 11204.71 IOPS, 43.77 MiB/s 00:32:54.621 Latency(us) 00:32:54.621 [2024-12-15T04:33:08.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.621 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:32:54.621 Verification LBA range: start 0x0 length 0x4000 00:32:54.621 NVMe0n1 : 15.00 11223.51 43.84 542.13 0.00 10857.32 415.45 16227.96 00:32:54.621 [2024-12-15T04:33:08.308Z] =================================================================================================================== 00:32:54.621 [2024-12-15T04:33:08.308Z] Total : 11223.51 43.84 542.13 0.00 10857.32 415.45 16227.96 00:32:54.621 Received shutdown signal, test time was about 15.000000 seconds 00:32:54.621 00:32:54.621 Latency(us) 00:32:54.621 [2024-12-15T04:33:08.308Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:32:54.621 [2024-12-15T04:33:08.308Z] 
=================================================================================================================== 00:32:54.621 [2024-12-15T04:33:08.308Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=482872 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 482872 /var/tmp/bdevperf.sock 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 482872 ']' 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:32:54.621 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.621 05:33:07 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:32:54.621 05:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.621 05:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:32:54.621 05:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:32:54.621 [2024-12-15 05:33:08.258130] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:32:54.621 05:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:32:54.880 [2024-12-15 05:33:08.450699] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:32:54.880 05:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:55.138 NVMe0n1 00:32:55.138 05:33:08 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:55.705 00:32:55.705 05:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f 
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:32:55.963 00:32:55.963 05:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:55.963 05:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:32:55.963 05:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:32:56.223 05:33:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:32:59.514 05:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:32:59.514 05:33:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:32:59.514 05:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=483782 00:32:59.514 05:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:32:59.514 05:33:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 483782 00:33:00.891 { 00:33:00.891 "results": [ 00:33:00.891 { 00:33:00.891 "job": "NVMe0n1", 00:33:00.891 "core_mask": "0x1", 00:33:00.891 "workload": "verify", 00:33:00.891 "status": "finished", 00:33:00.891 "verify_range": { 00:33:00.891 "start": 0, 00:33:00.891 "length": 16384 00:33:00.891 }, 00:33:00.891 "queue_depth": 128, 00:33:00.891 "io_size": 4096, 00:33:00.891 "runtime": 1.004842, 00:33:00.891 "iops": 11305.259931412103, 00:33:00.891 "mibps": 44.16117160707853, 00:33:00.891 "io_failed": 0, 00:33:00.891 "io_timeout": 0, 00:33:00.891 "avg_latency_us": 
11281.921716968476, 00:33:00.891 "min_latency_us": 2106.5142857142855, 00:33:00.891 "max_latency_us": 10298.514285714286 00:33:00.891 } 00:33:00.891 ], 00:33:00.891 "core_count": 1 00:33:00.891 } 00:33:00.891 05:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:00.891 [2024-12-15 05:33:07.894483] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:00.891 [2024-12-15 05:33:07.894541] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid482872 ] 00:33:00.891 [2024-12-15 05:33:07.968956] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:00.891 [2024-12-15 05:33:07.989139] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:00.891 [2024-12-15 05:33:09.795818] bdev_nvme.c:2057:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:33:00.891 [2024-12-15 05:33:09.795861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.891 [2024-12-15 05:33:09.795872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.891 [2024-12-15 05:33:09.795881] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.891 [2024-12-15 05:33:09.795888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.891 [2024-12-15 05:33:09.795895] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:33:00.891 [2024-12-15 05:33:09.795902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.891 [2024-12-15 05:33:09.795909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:33:00.891 [2024-12-15 05:33:09.795915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:33:00.891 [2024-12-15 05:33:09.795922] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:33:00.891 [2024-12-15 05:33:09.795946] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:33:00.891 [2024-12-15 05:33:09.795961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb223a0 (9): Bad file descriptor 00:33:00.891 [2024-12-15 05:33:09.849139] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:33:00.891 Running I/O for 1 seconds... 
00:33:00.891 11232.00 IOPS, 43.88 MiB/s 00:33:00.891 Latency(us) 00:33:00.891 [2024-12-15T04:33:14.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:33:00.891 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:33:00.891 Verification LBA range: start 0x0 length 0x4000 00:33:00.891 NVMe0n1 : 1.00 11305.26 44.16 0.00 0.00 11281.92 2106.51 10298.51 00:33:00.891 [2024-12-15T04:33:14.578Z] =================================================================================================================== 00:33:00.891 [2024-12-15T04:33:14.578Z] Total : 11305.26 44.16 0.00 0.00 11281.92 2106.51 10298.51 00:33:00.891 05:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:00.891 05:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:33:00.892 05:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:01.151 05:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:01.151 05:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:33:01.151 05:33:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:33:01.409 05:33:15 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 482872 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 482872 ']' 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 482872 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 482872 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 482872' 00:33:04.694 killing process with pid 482872 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 482872 00:33:04.694 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 482872 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # sync 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set +e 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:04.953 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:04.953 rmmod nvme_tcp 00:33:05.212 rmmod nvme_fabrics 00:33:05.212 rmmod nvme_keyring 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@128 -- # set -e 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # return 0 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@517 -- # '[' -n 479506 ']' 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@518 -- # killprocess 479506 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 479506 ']' 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 479506 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 479506 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 479506' 00:33:05.212 killing process with pid 479506 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 479506 00:33:05.212 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 479506 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # iptr 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-save 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@791 -- # iptables-restore 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:05.471 05:33:18 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.376 05:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:07.376 00:33:07.376 real 0m37.086s 00:33:07.376 user 1m57.485s 00:33:07.376 sys 
0m7.888s 00:33:07.376 05:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:07.376 05:33:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:33:07.376 ************************************ 00:33:07.376 END TEST nvmf_failover 00:33:07.376 ************************************ 00:33:07.376 05:33:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:07.376 05:33:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:07.376 05:33:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:07.377 05:33:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:07.636 ************************************ 00:33:07.636 START TEST nvmf_host_discovery 00:33:07.636 ************************************ 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:33:07.636 * Looking for test storage... 
00:33:07.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lcov --version 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:07.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.636 --rc genhtml_branch_coverage=1 00:33:07.636 --rc genhtml_function_coverage=1 00:33:07.636 --rc 
genhtml_legend=1 00:33:07.636 --rc geninfo_all_blocks=1 00:33:07.636 --rc geninfo_unexecuted_blocks=1 00:33:07.636 00:33:07.636 ' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:07.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.636 --rc genhtml_branch_coverage=1 00:33:07.636 --rc genhtml_function_coverage=1 00:33:07.636 --rc genhtml_legend=1 00:33:07.636 --rc geninfo_all_blocks=1 00:33:07.636 --rc geninfo_unexecuted_blocks=1 00:33:07.636 00:33:07.636 ' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:07.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.636 --rc genhtml_branch_coverage=1 00:33:07.636 --rc genhtml_function_coverage=1 00:33:07.636 --rc genhtml_legend=1 00:33:07.636 --rc geninfo_all_blocks=1 00:33:07.636 --rc geninfo_unexecuted_blocks=1 00:33:07.636 00:33:07.636 ' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:07.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:07.636 --rc genhtml_branch_coverage=1 00:33:07.636 --rc genhtml_function_coverage=1 00:33:07.636 --rc genhtml_legend=1 00:33:07.636 --rc geninfo_all_blocks=1 00:33:07.636 --rc geninfo_unexecuted_blocks=1 00:33:07.636 00:33:07.636 ' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:07.636 05:33:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:07.636 05:33:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:07.636 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # : 0 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:07.637 05:33:21 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:07.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@440 -- # remove_spdk_ns 
00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # xtrace_disable 00:33:07.637 05:33:21 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # pci_devs=() 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # net_devs=() 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # e810=() 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # local -ga e810 00:33:14.208 
05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # x722=() 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # local -ga x722 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # mlx=() 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@322 -- # local -ga mlx 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:14.208 05:33:26 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:33:14.208 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:14.209 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:14.209 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 
00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:14.209 Found net devices under 0000:af:00.0: cvl_0_0 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@411 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:14.209 Found net devices under 0000:af:00.1: cvl_0_1 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@442 -- # is_hw=yes 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:14.209 05:33:26 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@287 -- 
# ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:14.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:14.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:33:14.209 00:33:14.209 --- 10.0.0.2 ping statistics --- 00:33:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.209 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:14.209 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:14.209 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:33:14.209 00:33:14.209 --- 10.0.0.1 ping statistics --- 00:33:14.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:14.209 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # return 0 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:14.209 
05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@509 -- # nvmfpid=488067 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@510 -- # waitforlisten 488067 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 488067 ']' 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:14.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.209 [2024-12-15 05:33:27.187790] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:14.209 [2024-12-15 05:33:27.187837] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:14.209 [2024-12-15 05:33:27.266798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.209 [2024-12-15 05:33:27.288034] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:14.209 [2024-12-15 05:33:27.288068] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:14.209 [2024-12-15 05:33:27.288076] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:14.209 [2024-12-15 05:33:27.288082] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:14.209 [2024-12-15 05:33:27.288087] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:14.209 [2024-12-15 05:33:27.288565] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:14.209 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 [2024-12-15 05:33:27.414675] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 [2024-12-15 05:33:27.426837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:33:14.210 05:33:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 null0 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 null1 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=488222 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 488222 /tmp/host.sock 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@835 -- # '[' -z 488222 ']' 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:33:14.210 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 [2024-12-15 05:33:27.502643] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:14.210 [2024-12-15 05:33:27.502683] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid488222 ] 00:33:14.210 [2024-12-15 05:33:27.575540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:14.210 [2024-12-15 05:33:27.598276] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:33:14.210 05:33:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:33:14.210 05:33:27 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:14.210 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]]
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]]
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:14.469 05:33:27 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.469 [2024-12-15 05:33:28.012334] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]]
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]]
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.469 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:14.728 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
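The `waitforcondition` helper traced above (autotest_common.sh@918-@924 in this run) retries an arbitrary shell condition with a one-second pause between attempts. A minimal sketch of that polling loop, reconstructed from the xtrace output; the body is an illustrative reconstruction, not the harness's verbatim source:

```shell
#!/usr/bin/env bash
# Retry "$cond" up to "$max" times, sleeping 1s between attempts, mirroring the
# traced steps: local cond / local max=10 / (( max-- )) / eval "$cond" / sleep 1.
waitforcondition() {
	local cond=$1
	local max=${2:-10}
	while (( max-- )); do
		if eval "$cond"; then
			return 0 # condition held
		fi
		sleep 1
	done
	return 1 # condition never became true within max attempts
}
```

Usage then mirrors the trace, e.g. `waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'`, which is why the same `get_subsystem_names` pipeline reappears once per retry in the log.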
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:33:14.729 05:33:28 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
[2024-12-15 05:33:28.753496] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
[2024-12-15 05:33:28.753517] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
[2024-12-15 05:33:28.753528] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
[2024-12-15 05:33:28.879897] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
[2024-12-15 05:33:29.106008] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
[2024-12-15 05:33:29.106773] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd3df60:1 started.
[2024-12-15 05:33:29.108122] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
[2024-12-15 05:33:29.108138] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
[2024-12-15 05:33:29.111726] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd3df60 was disconnected and freed. delete nvme_qpair.
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:15.555 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:33:15.815 [2024-12-15 05:33:29.408168] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xd3e6b0:1 started.
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
[2024-12-15 05:33:29.412513] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0xd3e6b0 was disconnected and freed. delete nvme_qpair.
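The helpers being traced here, `get_subsystem_names` (host/discovery.sh@59) and `get_bdev_list` (host/discovery.sh@55), are small RPC-plus-`jq` pipelines over the host app's socket. A self-contained sketch of those pipelines; the canned JSON in `rpc_cmd` is invented for illustration (in the harness, `rpc_cmd` wraps SPDK's scripts/rpc.py, and `-s /tmp/host.sock` selects the host app's RPC socket rather than the target's):

```shell
#!/usr/bin/env bash
# Stub for scripts/rpc.py so the pipelines below run in isolation; the JSON
# payloads are invented, not real SPDK output.
rpc_cmd() {
	case "$*" in
	*bdev_nvme_get_controllers*) echo '[{"name":"nvme0"}]' ;;
	*bdev_get_bdevs*)            echo '[{"name":"nvme0n2"},{"name":"nvme0n1"}]' ;;
	esac
}

# Controller names known to the host app, as one sorted space-separated line.
get_subsystem_names() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name' | sort | xargs
}

# Bdev names known to the host app, as one sorted space-separated line.
get_bdev_list() {
	rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}
```

The `sort | xargs` tail is what makes string comparisons such as `[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]` in the trace deterministic regardless of the order the RPC returns.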
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:15.815 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:15.816 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:16.075 [2024-12-15 05:33:29.504317] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:33:16.075 [2024-12-15 05:33:29.505418] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:33:16.075 [2024-12-15 05:33:29.505438] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:33:16.075 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:16.075 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:16.076 [2024-12-15 05:33:29.631802] bdev_nvme.c:7440:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]]
00:33:16.076 05:33:29 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:33:16.076 [2024-12-15 05:33:29.697342] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421
00:33:16.076 [2024-12-15 05:33:29.697374] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:33:16.076 [2024-12-15 05:33:29.697382] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
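`get_subsystem_paths` (host/discovery.sh@63), whose `4420 4421` result the retry loop above is waiting for once the second listener is discovered, lists the `trsvcid` of every path a controller currently has, numerically sorted. The same sketch style, with a hypothetical stubbed `rpc_cmd` standing in for scripts/rpc.py:

```shell
#!/usr/bin/env bash
# Stub for scripts/rpc.py: one controller that currently has two paths, as after
# the second listener (4421) has been discovered next to the original one (4420).
# This JSON payload is invented for illustration.
rpc_cmd() {
	echo '[{"name":"nvme0","ctrlrs":[{"trid":{"trsvcid":"4421"}},{"trid":{"trsvcid":"4420"}}]}]'
}

# Service IDs of every path of one controller, numerically sorted, matching the
# traced pipeline: jq '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs.
get_subsystem_paths() {
	rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n "$1" |
		jq -r '.[].ctrlrs[].trid.trsvcid' | sort -n | xargs
}
```

`sort -n` is what lets the test compare against `"$NVMF_PORT $NVMF_SECOND_PORT"` (4420 4421) even though the RPC may list the newer path first.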
00:33:16.076 [2024-12-15 05:33:29.697386] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]]
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:33:17.013 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '.
| length'
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.273 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:17.273 [2024-12-15 05:33:30.748301] bdev_nvme.c:7498:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
00:33:17.273 [2024-12-15 05:33:30.748331] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:33:17.273 [2024-12-15 05:33:30.750175] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:33:17.273 [2024-12-15 05:33:30.750205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:17.273 [2024-12-15 05:33:30.750219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:33:17.273 [2024-12-15 05:33:30.750230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:17.273 [2024-12-15 05:33:30.750241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:33:17.273 [2024-12-15 05:33:30.750252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:17.273 [2024-12-15 05:33:30.750269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:33:17.274 [2024-12-15 05:33:30.750280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:33:17.274 [2024-12-15 05:33:30.750291] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name'
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs
00:33:17.274 [2024-12-15 05:33:30.760183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor
00:33:17.274 [2024-12-15 05:33:30.770218] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset.
00:33:17.274 [2024-12-15 05:33:30.770232] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted.
00:33:17.274 [2024-12-15 05:33:30.770239] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr.
00:33:17.274 [2024-12-15 05:33:30.770244] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:33:17.274 [2024-12-15 05:33:30.770264] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr.
00:33:17.274 [2024-12-15 05:33:30.770538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.274 [2024-12-15 05:33:30.770554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0fef0 with addr=10.0.0.2, port=4420 00:33:17.274 [2024-12-15 05:33:30.770563] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set 00:33:17.274 [2024-12-15 05:33:30.770576] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor 00:33:17.274 [2024-12-15 05:33:30.770593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.274 [2024-12-15 05:33:30.770601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.274 [2024-12-15 05:33:30.770610] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:17.274 [2024-12-15 05:33:30.770616] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:17.274 [2024-12-15 05:33:30.770621] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:17.274 [2024-12-15 05:33:30.770626] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.274 [2024-12-15 05:33:30.780295] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:17.274 [2024-12-15 05:33:30.780304] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:33:17.274 [2024-12-15 05:33:30.780309] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:17.274 [2024-12-15 05:33:30.780313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.274 [2024-12-15 05:33:30.780327] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:17.274 [2024-12-15 05:33:30.780529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.274 [2024-12-15 05:33:30.780543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0fef0 with addr=10.0.0.2, port=4420 00:33:17.274 [2024-12-15 05:33:30.780552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set 00:33:17.274 [2024-12-15 05:33:30.780563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor 00:33:17.274 [2024-12-15 05:33:30.780573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.274 [2024-12-15 05:33:30.780579] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.274 [2024-12-15 05:33:30.780586] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:17.274 [2024-12-15 05:33:30.780592] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:17.274 [2024-12-15 05:33:30.780596] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:17.274 [2024-12-15 05:33:30.780600] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:17.274 [2024-12-15 05:33:30.790356] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:17.274 [2024-12-15 05:33:30.790367] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:17.274 [2024-12-15 05:33:30.790371] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:17.274 [2024-12-15 05:33:30.790375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.274 [2024-12-15 05:33:30.790389] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:17.274 [2024-12-15 05:33:30.790582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.274 [2024-12-15 05:33:30.790596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0fef0 with addr=10.0.0.2, port=4420 00:33:17.274 [2024-12-15 05:33:30.790604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set 00:33:17.274 [2024-12-15 05:33:30.790615] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor 00:33:17.274 [2024-12-15 05:33:30.790625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.274 [2024-12-15 05:33:30.790631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.274 [2024-12-15 05:33:30.790638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:17.274 [2024-12-15 05:33:30.790644] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:33:17.274 [2024-12-15 05:33:30.790651] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:17.274 [2024-12-15 05:33:30.790655] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:17.274 [2024-12-15 05:33:30.800421] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:17.274 [2024-12-15 05:33:30.800435] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:17.274 [2024-12-15 05:33:30.800440] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:17.274 [2024-12-15 05:33:30.800444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.274 [2024-12-15 05:33:30.800458] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:17.274 [2024-12-15 05:33:30.800620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.274 [2024-12-15 05:33:30.800636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0fef0 with addr=10.0.0.2, port=4420 00:33:17.274 [2024-12-15 05:33:30.800644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set 00:33:17.274 [2024-12-15 05:33:30.800656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor 00:33:17.274 [2024-12-15 05:33:30.800665] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.274 [2024-12-15 05:33:30.800671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.274 [2024-12-15 05:33:30.800678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:17.274 [2024-12-15 05:33:30.800683] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:17.274 [2024-12-15 05:33:30.800688] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:17.274 [2024-12-15 05:33:30.800692] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.274 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.275 [2024-12-15 05:33:30.810489] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:33:17.275 [2024-12-15 05:33:30.810506] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:17.275 [2024-12-15 05:33:30.810510] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:17.275 [2024-12-15 05:33:30.810514] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.275 [2024-12-15 05:33:30.810528] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:17.275 [2024-12-15 05:33:30.810676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.275 [2024-12-15 05:33:30.810690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0fef0 with addr=10.0.0.2, port=4420 00:33:17.275 [2024-12-15 05:33:30.810698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set 00:33:17.275 [2024-12-15 05:33:30.810709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor 00:33:17.275 [2024-12-15 05:33:30.810718] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.275 [2024-12-15 05:33:30.810724] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.275 [2024-12-15 05:33:30.810732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:17.275 [2024-12-15 05:33:30.810738] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:17.275 [2024-12-15 05:33:30.810743] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:33:17.275 [2024-12-15 05:33:30.810747] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:17.275 [2024-12-15 05:33:30.820560] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:17.275 [2024-12-15 05:33:30.820573] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:17.275 [2024-12-15 05:33:30.820578] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:17.275 [2024-12-15 05:33:30.820581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.275 [2024-12-15 05:33:30.820596] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:33:17.275 [2024-12-15 05:33:30.820756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.275 [2024-12-15 05:33:30.820768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0fef0 with addr=10.0.0.2, port=4420 00:33:17.275 [2024-12-15 05:33:30.820775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set 00:33:17.275 [2024-12-15 05:33:30.820786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor 00:33:17.275 [2024-12-15 05:33:30.820796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.275 [2024-12-15 05:33:30.820802] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.275 [2024-12-15 05:33:30.820809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:33:17.275 [2024-12-15 05:33:30.820814] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:17.275 [2024-12-15 05:33:30.820819] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:17.275 [2024-12-15 05:33:30.820823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:33:17.275 [2024-12-15 05:33:30.830627] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:33:17.275 [2024-12-15 05:33:30.830644] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:33:17.275 [2024-12-15 05:33:30.830649] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:33:17.275 [2024-12-15 05:33:30.830652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:33:17.275 [2024-12-15 05:33:30.830667] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:33:17.275 [2024-12-15 05:33:30.830877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:17.275 [2024-12-15 05:33:30.830891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd0fef0 with addr=10.0.0.2, port=4420 00:33:17.275 [2024-12-15 05:33:30.830899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xd0fef0 is same with the state(6) to be set 00:33:17.275 [2024-12-15 05:33:30.830910] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd0fef0 (9): Bad file descriptor 00:33:17.275 [2024-12-15 05:33:30.830926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:33:17.275 [2024-12-15 05:33:30.830934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:33:17.275 [2024-12-15 05:33:30.830941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:33:17.275 [2024-12-15 05:33:30.830947] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:33:17.275 [2024-12-15 05:33:30.830951] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:33:17.275 [2024-12-15 05:33:30.830955] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:33:17.275 [2024-12-15 05:33:30.835945] bdev_nvme.c:7303:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:33:17.275 [2024-12-15 05:33:30.835960] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.275 
05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:33:17.275 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.276 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # get_subsystem_names 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:33:17.535 05:33:30 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:17.535 05:33:31 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:17.535 05:33:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.913 [2024-12-15 05:33:32.161115] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:33:18.913 [2024-12-15 05:33:32.161132] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:33:18.913 [2024-12-15 05:33:32.161144] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:33:18.914 [2024-12-15 05:33:32.247405] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
new subsystem nvme0 00:33:18.914 [2024-12-15 05:33:32.547719] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:33:18.914 [2024-12-15 05:33:32.548262] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0xd40560:1 started. 00:33:18.914 [2024-12-15 05:33:32.549835] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:33:18.914 [2024-12-15 05:33:32.549859] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 
10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:18.914 request: 00:33:18.914 { 00:33:18.914 "name": "nvme", 00:33:18.914 "trtype": "tcp", 00:33:18.914 "traddr": "10.0.0.2", 00:33:18.914 "adrfam": "ipv4", 00:33:18.914 "trsvcid": "8009", 00:33:18.914 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:18.914 "wait_for_attach": true, 00:33:18.914 "method": "bdev_nvme_start_discovery", 00:33:18.914 "req_id": 1 00:33:18.914 } 00:33:18.914 Got JSON-RPC error response 00:33:18.914 response: 00:33:18.914 { 00:33:18.914 "code": -17, 00:33:18.914 "message": "File exists" 00:33:18.914 } 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@10 -- # set +x 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:18.914 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:18.914 [2024-12-15 05:33:32.591484] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0xd40560 was disconnected and freed. delete nvme_qpair. 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.173 request: 00:33:19.173 { 00:33:19.173 "name": "nvme_second", 00:33:19.173 "trtype": "tcp", 00:33:19.173 "traddr": "10.0.0.2", 00:33:19.173 "adrfam": "ipv4", 00:33:19.173 "trsvcid": "8009", 00:33:19.173 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:19.173 "wait_for_attach": true, 00:33:19.173 "method": "bdev_nvme_start_discovery", 00:33:19.173 "req_id": 1 00:33:19.173 } 00:33:19.173 Got JSON-RPC error response 00:33:19.173 response: 00:33:19.173 { 00:33:19.173 "code": -17, 00:33:19.173 "message": "File exists" 00:33:19.173 } 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 
00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:33:19.173 05:33:32 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:19.173 05:33:32 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:20.550 [2024-12-15 05:33:33.797291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:20.550 [2024-12-15 05:33:33.797318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection 
error of tqpair=0xd3d790 with addr=10.0.0.2, port=8010 00:33:20.550 [2024-12-15 05:33:33.797330] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:20.550 [2024-12-15 05:33:33.797337] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:20.550 [2024-12-15 05:33:33.797344] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:21.118 [2024-12-15 05:33:34.799724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:33:21.118 [2024-12-15 05:33:34.799749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xd3fd70 with addr=10.0.0.2, port=8010 00:33:21.118 [2024-12-15 05:33:34.799761] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:33:21.118 [2024-12-15 05:33:34.799767] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:33:21.118 [2024-12-15 05:33:34.799774] bdev_nvme.c:7584:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:33:22.493 [2024-12-15 05:33:35.801903] bdev_nvme.c:7559:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:33:22.493 request: 00:33:22.493 { 00:33:22.493 "name": "nvme_second", 00:33:22.493 "trtype": "tcp", 00:33:22.493 "traddr": "10.0.0.2", 00:33:22.493 "adrfam": "ipv4", 00:33:22.493 "trsvcid": "8010", 00:33:22.493 "hostnqn": "nqn.2021-12.io.spdk:test", 00:33:22.493 "wait_for_attach": false, 00:33:22.493 "attach_timeout_ms": 3000, 00:33:22.493 "method": "bdev_nvme_start_discovery", 00:33:22.493 "req_id": 1 00:33:22.493 } 00:33:22.493 Got JSON-RPC error response 00:33:22.493 response: 00:33:22.493 { 00:33:22.493 "code": -110, 00:33:22.493 "message": "Connection timed out" 00:33:22.493 } 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:33:22.493 05:33:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 488222 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@516 -- # nvmfcleanup 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # sync 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set +e 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # for i in {1..20} 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:33:22.493 rmmod nvme_tcp 00:33:22.493 rmmod nvme_fabrics 00:33:22.493 rmmod nvme_keyring 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@128 -- # set -e 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # return 0 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@517 -- # '[' -n 488067 ']' 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@518 -- # killprocess 488067 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 488067 ']' 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 488067 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 488067 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 488067' 00:33:22.493 killing process with pid 488067 
00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 488067 00:33:22.493 05:33:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 488067 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # iptr 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-save 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@791 -- # iptables-restore 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # remove_spdk_ns 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:22.493 05:33:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:33:25.031 00:33:25.031 real 0m17.133s 00:33:25.031 user 0m20.504s 00:33:25.031 sys 0m5.734s 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 
00:33:25.031 ************************************ 00:33:25.031 END TEST nvmf_host_discovery 00:33:25.031 ************************************ 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:33:25.031 ************************************ 00:33:25.031 START TEST nvmf_host_multipath_status 00:33:25.031 ************************************ 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:33:25.031 * Looking for test storage... 
00:33:25.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lcov --version 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:33:25.031 05:33:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:25.031 05:33:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:33:25.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.031 --rc genhtml_branch_coverage=1 00:33:25.031 --rc genhtml_function_coverage=1 00:33:25.031 --rc genhtml_legend=1 00:33:25.031 --rc geninfo_all_blocks=1 00:33:25.031 --rc geninfo_unexecuted_blocks=1 00:33:25.031 00:33:25.031 ' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:33:25.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.031 --rc genhtml_branch_coverage=1 00:33:25.031 --rc genhtml_function_coverage=1 00:33:25.031 --rc genhtml_legend=1 00:33:25.031 --rc geninfo_all_blocks=1 00:33:25.031 --rc geninfo_unexecuted_blocks=1 00:33:25.031 00:33:25.031 ' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:33:25.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.031 --rc genhtml_branch_coverage=1 00:33:25.031 --rc genhtml_function_coverage=1 00:33:25.031 --rc genhtml_legend=1 00:33:25.031 --rc geninfo_all_blocks=1 00:33:25.031 --rc geninfo_unexecuted_blocks=1 00:33:25.031 00:33:25.031 ' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:33:25.031 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:25.031 --rc genhtml_branch_coverage=1 00:33:25.031 --rc genhtml_function_coverage=1 00:33:25.031 --rc genhtml_legend=1 00:33:25.031 --rc geninfo_all_blocks=1 00:33:25.031 --rc geninfo_unexecuted_blocks=1 00:33:25.031 00:33:25.031 ' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:33:25.031 
05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 
00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@15 -- # shopt -s extglob 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.031 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # : 0 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:33:25.032 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@55 -- # have_pci_nics=0 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 
00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@476 -- # prepare_net_devs 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@438 -- # local -g is_hw=no 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@440 -- # remove_spdk_ns 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:33:25.032 05:33:38 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # xtrace_disable 00:33:25.032 05:33:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # pci_devs=() 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # local -a pci_devs 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # pci_net_devs=() 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # pci_drivers=() 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # local -A pci_drivers 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # net_devs=() 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@319 -- # local -ga net_devs 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # e810=() 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # local -ga e810 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # x722=() 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # local -ga x722 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # mlx=() 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@322 -- # local -ga mlx 
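Editor's note: the trace that follows matches known Intel E810/X722 and Mellanox PCI device IDs, then resolves each PCI function to its kernel netdev by globbing `"/sys/bus/pci/devices/$pci/net/"*` (common.sh line 411). A hedged sketch of that sysfs lookup, parameterized on a base directory so it can be exercised without the real hardware; `pci_net_names` is an illustrative helper, not a function from `nvmf/common.sh`:

```shell
#!/bin/sh
# List the network interface names that sit under one PCI function in a
# sysfs-shaped tree. Illustrative helper; common.sh uses a bash array
# glob ("/sys/bus/pci/devices/$pci/net/"*) for the same lookup.
pci_net_names() {
    dir="$1/net"
    [ -d "$dir" ] || return 0          # no netdev bound to this function
    for d in "$dir"/*; do
        [ -e "$d" ] && basename "$d"   # interface name, e.g. cvl_0_0
    done
}

# Demonstrate against a throwaway directory shaped like sysfs.
base=$(mktemp -d)
mkdir -p "$base/0000:af:00.0/net/cvl_0_0"
pci_net_names "$base/0000:af:00.0"     # -> cvl_0_0
```

The log's `Found net devices under 0000:af:00.0: cvl_0_0` lines are the result of exactly this kind of lookup against the live `/sys/bus/pci/devices` tree.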
00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@353 -- # [[ e810 
== mlx5 ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:33:31.600 Found 0000:af:00.0 (0x8086 - 0x159b) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:33:31.600 Found 0000:af:00.1 (0x8086 - 0x159b) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:33:31.600 Found net devices under 0000:af:00.0: cvl_0_0 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:31.600 05:33:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # [[ up == up ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:33:31.600 Found net devices under 0000:af:00.1: cvl_0_1 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@442 -- # is_hw=yes 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:33:31.600 05:33:44 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:33:31.600 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:31.600 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.348 ms 00:33:31.600 00:33:31.600 --- 10.0.0.2 ping statistics --- 00:33:31.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.600 rtt min/avg/max/mdev = 0.348/0.348/0.348/0.000 ms 00:33:31.600 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:33:31.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:31.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.184 ms 00:33:31.600 00:33:31.600 --- 10.0.0.1 ping statistics --- 00:33:31.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:31.601 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # return 0 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@509 -- # nvmfpid=493197 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@510 -- # waitforlisten 493197 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493197 ']' 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:31.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:31.601 [2024-12-15 05:33:44.524598] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:33:31.601 [2024-12-15 05:33:44.524647] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:31.601 [2024-12-15 05:33:44.602398] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:33:31.601 [2024-12-15 05:33:44.625160] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:31.601 [2024-12-15 05:33:44.625198] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
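Editor's note: scattered through the `nvmf_tcp_init` trace above is a small fixed topology: the target port `cvl_0_0` is moved into the `cvl_0_0_ns_spdk` namespace with 10.0.0.2/24, the initiator port `cvl_0_1` stays in the root namespace with 10.0.0.1/24, port 4420 is opened via iptables, and a cross-namespace ping verifies the link before `nvmf_tgt` starts inside the namespace. A consolidated recap of those commands (interface names and addresses are taken verbatim from the log); it needs root and the physical cvl ports, so it is wrapped in a function and only run when both are present:

```shell
#!/bin/sh
# Recap of the namespace topology built by nvmf_tcp_init in the trace
# above. Requires root plus the cvl_0_0/cvl_0_1 ports, hence the guard.
build_topology() {
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk              # target side
    ip addr add 10.0.0.1/24 dev cvl_0_1                    # initiator side
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                     # cross-ns check
}

if [ "$(id -u)" -eq 0 ] && [ -d /sys/class/net/cvl_0_0 ]; then
    build_topology
else
    echo "skipping topology build (needs root and the cvl ports)"
fi
```

With this in place, the target listens on 10.0.0.2:4420/4421 inside the namespace while the host-side initiator reaches it over 10.0.0.1, which is the path the multipath test exercises.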
00:33:31.601 [2024-12-15 05:33:44.625205] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:31.601 [2024-12-15 05:33:44.625211] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:31.601 [2024-12-15 05:33:44.625216] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:33:31.601 [2024-12-15 05:33:44.626344] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:33:31.601 [2024-12-15 05:33:44.626345] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=493197 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:33:31.601 [2024-12-15 05:33:44.918827] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:31.601 05:33:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:33:31.601 Malloc0 00:33:31.601 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:33:31.859 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:33:32.118 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:32.118 [2024-12-15 05:33:45.719632] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:32.118 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:33:32.377 [2024-12-15 05:33:45.932191] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=493445 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 493445 /var/tmp/bdevperf.sock 00:33:32.377 05:33:45 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 493445 ']' 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:33:32.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.377 05:33:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:33:32.637 05:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.637 05:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:33:32.637 05:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:33:32.896 05:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:33.155 Nvme0n1 00:33:33.155 05:33:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:33:33.414 Nvme0n1 00:33:33.414 05:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:33:33.414 05:33:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:33:35.949 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:33:35.949 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:33:35.949 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:33:35.950 05:33:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:33:36.887 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:33:36.887 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:33:36.887 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:36.887 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:33:37.146 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.146 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:33:37.146 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:33:37.146 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.404 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:33:37.404 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:33:37.404 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.404 05:33:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:33:37.404 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.404 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:33:37.404 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.404 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:33:37.663 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.663 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:33:37.663 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.663 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:33:37.922 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:37.922 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:33:37.922 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:33:37.922 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:33:38.180 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:33:38.180 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:33:38.180 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:33:38.439 05:33:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:38.439 05:33:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1
00:33:39.818 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true
00:33:39.819 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:39.819 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:39.819 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:39.819 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:39.819 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:39.819 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:39.819 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:40.078 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:40.337 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:40.337 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:40.337 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:40.337 05:33:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:40.595 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:40.595 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:40.596 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:40.596 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:40.866 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:40.866 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized
00:33:40.866 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:41.125 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:33:41.385 05:33:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1
00:33:42.317 05:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true
00:33:42.317 05:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:42.317 05:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:42.317 05:33:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:42.576 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:42.835 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:42.835 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:42.835 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:42.835 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:43.094 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:43.094 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:43.094 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:43.094 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:43.368 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:43.368 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:43.368 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:43.368 05:33:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:43.645 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:43.645 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible
00:33:43.645 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:43.645 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:43.908 05:33:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1
00:33:44.885 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false
00:33:44.885 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:44.885 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:44.885 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:45.161 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:45.161 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:45.161 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:45.161 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:45.439 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:45.439 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:45.439 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:45.439 05:33:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:45.724 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:46.002 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:46.002 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:46.002 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:46.002 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:46.265 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:46.265 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible
00:33:46.265 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:33:46.535 05:33:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible
00:33:46.535 05:34:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1
00:33:47.519 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false
00:33:47.519 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:47.519 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:47.519 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:47.812 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:47.812 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false
00:33:47.812 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:47.812 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:48.091 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:48.091 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:48.092 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:48.092 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:48.092 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:48.092 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:48.092 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:48.092 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:48.374 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:48.374 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:33:48.374 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:48.374 05:34:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:48.643 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:48.643 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false
00:33:48.643 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:48.643 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:48.923 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:48.923 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized
00:33:48.923 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible
00:33:48.923 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:49.196 05:34:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1
00:33:50.169 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true
00:33:50.169 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:50.169 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:50.169 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:50.428 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:50.428 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:50.428 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:50.428 05:34:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:50.687 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:50.687 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:50.687 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:50.687 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:50.946 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:51.205 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:51.205 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:51.205 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:51.205 05:34:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:51.464 05:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:51.464 05:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active
00:33:51.722 05:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized
00:33:51.722 05:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized
00:33:51.980 05:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:51.980 05:34:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:53.355 05:34:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:53.614 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:53.615 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:53.615 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:53.615 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:53.615 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:53.615 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:53.615 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:53.615 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:53.873 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:53.873 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:53.873 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:53.873 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:54.132 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:54.132 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:54.132 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:54.132 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:54.391 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:54.391 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized
00:33:54.391 05:34:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:54.649 05:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized
00:33:54.908 05:34:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1
00:33:55.846 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true
00:33:55.846 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false
00:33:55.846 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:55.846 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]]
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.105 05:34:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:56.365 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:56.365 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:56.365 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:56.365 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.623 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:56.623 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:56.623 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.623 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:56.882 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:56.882 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:56.882 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:56.882 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:57.141 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:57.141 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized
00:33:57.141 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:33:57.400 05:34:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
00:33:57.400 05:34:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current'
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:58.777 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current'
00:33:59.036 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.036 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true
00:33:59.036 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.036 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected'
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected'
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.294 05:34:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible'
00:33:59.553 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.553 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true
00:33:59.553 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths
00:33:59.553 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible'
00:33:59.813 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]]
00:33:59.813 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible
00:33:59.813 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
00:34:00.072 05:34:13
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:34:00.331 05:34:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:34:01.269 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:34:01.269 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:34:01.269 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.269 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:34:01.528 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.528 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:34:01.528 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.528 05:34:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:34:01.528 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:01.528 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:34:01.528 
05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.528 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:34:01.786 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:01.786 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:34:01.786 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:01.786 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:34:02.045 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.045 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:34:02.045 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.045 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 
00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 493445 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493445 ']' 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493445 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.304 05:34:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493445 00:34:02.567 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:34:02.567 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:34:02.567 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493445' 00:34:02.567 killing process with pid 493445 00:34:02.567 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493445 00:34:02.567 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493445 00:34:02.567 { 00:34:02.567 
"results": [ 00:34:02.567 { 00:34:02.567 "job": "Nvme0n1", 00:34:02.567 "core_mask": "0x4", 00:34:02.567 "workload": "verify", 00:34:02.567 "status": "terminated", 00:34:02.567 "verify_range": { 00:34:02.567 "start": 0, 00:34:02.567 "length": 16384 00:34:02.567 }, 00:34:02.567 "queue_depth": 128, 00:34:02.567 "io_size": 4096, 00:34:02.567 "runtime": 28.898435, 00:34:02.567 "iops": 10672.550260939735, 00:34:02.567 "mibps": 41.68964945679584, 00:34:02.567 "io_failed": 0, 00:34:02.567 "io_timeout": 0, 00:34:02.567 "avg_latency_us": 11957.093956206905, 00:34:02.567 "min_latency_us": 185.2952380952381, 00:34:02.567 "max_latency_us": 3019898.88 00:34:02.567 } 00:34:02.567 ], 00:34:02.567 "core_count": 1 00:34:02.567 } 00:34:02.567 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 493445 00:34:02.567 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:02.567 [2024-12-15 05:33:46.007250] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:02.567 [2024-12-15 05:33:46.007305] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid493445 ] 00:34:02.567 [2024-12-15 05:33:46.079552] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:02.567 [2024-12-15 05:33:46.102132] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:34:02.567 Running I/O for 90 seconds... 
00:34:02.567 11478.00 IOPS, 44.84 MiB/s [2024-12-15T04:34:16.254Z] 11536.50 IOPS, 45.06 MiB/s [2024-12-15T04:34:16.254Z] 11482.67 IOPS, 44.85 MiB/s [2024-12-15T04:34:16.254Z] 11487.50 IOPS, 44.87 MiB/s [2024-12-15T04:34:16.254Z] 11515.00 IOPS, 44.98 MiB/s [2024-12-15T04:34:16.254Z] 11503.83 IOPS, 44.94 MiB/s [2024-12-15T04:34:16.254Z] 11498.71 IOPS, 44.92 MiB/s [2024-12-15T04:34:16.254Z] 11519.38 IOPS, 45.00 MiB/s [2024-12-15T04:34:16.254Z] 11517.78 IOPS, 44.99 MiB/s [2024-12-15T04:34:16.254Z] 11508.50 IOPS, 44.96 MiB/s [2024-12-15T04:34:16.254Z] 11503.09 IOPS, 44.93 MiB/s [2024-12-15T04:34:16.254Z] 11502.75 IOPS, 44.93 MiB/s [2024-12-15T04:34:16.254Z] [2024-12-15 05:33:59.944173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:129520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:129528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:129536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:129544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:129552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:129560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:129568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:129576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:129584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944409] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:129592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:129600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:129608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:129616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:129624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:129632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944525] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:129640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:129648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:129656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:129664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:129672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.567 [2024-12-15 05:33:59.944627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:34:02.567 [2024-12-15 05:33:59.944639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:129680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:129688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:129696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:129704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:129712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:129720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944749] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:129728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:129736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:129744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:129752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:129760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944855] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:129768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:129776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:129784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:129792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:129800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:129808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:129816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.944980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.944998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:129824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:129832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:129840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:129848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945397] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:129856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:129864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:129872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:129880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:129888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:129896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945513] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:129904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:129264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.568 [2024-12-15 05:33:59.945555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:129272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.568 [2024-12-15 05:33:59.945576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:129280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.568 [2024-12-15 05:33:59.945597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:129288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.568 [2024-12-15 05:33:59.945617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945631] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:129296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.568 [2024-12-15 05:33:59.945638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:129304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.568 [2024-12-15 05:33:59.945658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:129312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.568 [2024-12-15 05:33:59.945678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:129912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:129920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:129928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945759] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:129936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.568 [2024-12-15 05:33:59.945780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:34:02.568 [2024-12-15 05:33:59.945794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:129944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:129952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:129960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:129968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:129976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:129984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:129992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:130000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:130008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.945977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.945998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:130016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946006] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:130024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:130032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:130040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:130048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:130056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946126] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:130064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:130072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:130080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:130088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:130096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:130104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:130112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:130120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:130128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:130136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:130144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946455] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:130152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:130160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:130168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:130176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:130184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:130192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946579] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:130200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:130208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:130216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:129320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.569 [2024-12-15 05:33:59.946675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:129328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.569 [2024-12-15 05:33:59.946706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946724] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:130224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:130232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:130240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.569 [2024-12-15 05:33:59.946777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:34:02.569 [2024-12-15 05:33:59.946793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:130248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:33:59.946801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.946857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:130256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:33:59.946865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.946883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:130264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:33:59.946890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.946907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:130272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:33:59.946914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.946931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:129336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.946938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.946955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:129344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.946962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.946979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:129352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.946986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:129360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947034] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:129368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:129376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:129384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:129392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:129400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:129408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947165] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:129416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:129424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:129432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:129440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:129448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:130280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:33:59.947311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:129456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:129464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:129472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:129480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:129488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947433] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:129496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:129504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:33:59.947499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:129512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:34:02.570 [2024-12-15 05:33:59.947507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:02.570 11325.38 IOPS, 44.24 MiB/s [2024-12-15T04:34:16.257Z] 10516.43 IOPS, 41.08 MiB/s [2024-12-15T04:34:16.257Z] 9815.33 IOPS, 38.34 MiB/s [2024-12-15T04:34:16.257Z] 9344.94 IOPS, 36.50 MiB/s [2024-12-15T04:34:16.257Z] 9471.00 IOPS, 37.00 MiB/s [2024-12-15T04:34:16.257Z] 9583.89 IOPS, 37.44 MiB/s [2024-12-15T04:34:16.257Z] 9754.89 IOPS, 38.11 MiB/s [2024-12-15T04:34:16.257Z] 9952.65 IOPS, 38.88 MiB/s [2024-12-15T04:34:16.257Z] 10123.00 IOPS, 39.54 MiB/s [2024-12-15T04:34:16.257Z] 10193.45 IOPS, 39.82 MiB/s [2024-12-15T04:34:16.257Z] 10246.61 IOPS, 40.03 MiB/s [2024-12-15T04:34:16.257Z] 10312.62 IOPS, 40.28 MiB/s [2024-12-15T04:34:16.257Z] 10445.48 IOPS, 40.80 MiB/s [2024-12-15T04:34:16.257Z] 10563.27 IOPS, 41.26 MiB/s [2024-12-15T04:34:16.257Z] [2024-12-15 
05:34:13.736983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:29744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:29760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:29776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:29792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737172] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:29808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:29824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:29840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.570 [2024-12-15 05:34:13.737232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:34:02.570 [2024-12-15 05:34:13.737244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:29856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.571 [2024-12-15 05:34:13.737251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:34:02.571 [2024-12-15 05:34:13.737264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:29872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:34:02.571 [2024-12-15 05:34:13.737271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:34:02.571 [2024-12-15 05:34:13.737283] nvme_qpair.c: 
00:34:02.571 [2024-12-15 05:34:13.737291 .. 05:34:13.739629] nvme_qpair.c: [repeated NOTICE pairs elided: 243:nvme_io_qpair_print_command WRITE/READ sqid:1 nsid:1 lba:29624-30632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, each followed by 474:spdk_nvme_print_completion ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cdw0:0 sqhd:003c-0070 p:0 m:0 dnr:0] 00:34:02.572 10642.22 IOPS, 41.57 MiB/s [2024-12-15T04:34:16.259Z] 10669.96 IOPS, 41.68 MiB/s [2024-12-15T04:34:16.259Z] Received shutdown signal, test time was about 28.899090 seconds 00:34:02.572 00:34:02.572 Latency(us) 00:34:02.572 [2024-12-15T04:34:16.259Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:34:02.572 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:34:02.572 Verification LBA range: start 0x0 length 0x4000 00:34:02.572 Nvme0n1 : 28.90 10672.55 41.69 0.00 0.00 11957.09 185.30 3019898.88 00:34:02.572 [2024-12-15T04:34:16.259Z] =================================================================================================================== 00:34:02.572 [2024-12-15T04:34:16.259Z] Total : 10672.55 41.69 0.00 0.00
11957.09 185.30 3019898.88 00:34:02.572 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:02.831 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:34:02.831 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:34:02.831 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:34:02.831 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:02.831 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # sync 00:34:02.831 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set +e 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:02.832 rmmod nvme_tcp 00:34:02.832 rmmod nvme_fabrics 00:34:02.832 rmmod nvme_keyring 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@128 -- # set -e 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # return 0 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@517 -- # '[' -n 493197 ']' 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@518 -- # killprocess 493197 00:34:02.832 
05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 493197 ']' 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 493197 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:02.832 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 493197 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 493197' 00:34:03.091 killing process with pid 493197 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 493197 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 493197 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # iptr 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-save 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:03.091 05:34:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@791 -- # iptables-restore 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:03.091 05:34:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:05.628 00:34:05.628 real 0m40.531s 00:34:05.628 user 1m49.722s 00:34:05.628 sys 0m11.577s 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:34:05.628 ************************************ 00:34:05.628 END TEST nvmf_host_multipath_status 00:34:05.628 ************************************ 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:05.628 ************************************ 00:34:05.628 START TEST nvmf_discovery_remove_ifc 00:34:05.628 ************************************ 00:34:05.628 
05:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:34:05.628 * Looking for test storage... 00:34:05.628 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lcov --version 00:34:05.628 05:34:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@345 -- # : 1 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:34:05.628 05:34:19 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:05.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.628 --rc genhtml_branch_coverage=1 00:34:05.628 --rc genhtml_function_coverage=1 00:34:05.628 --rc genhtml_legend=1 00:34:05.628 --rc geninfo_all_blocks=1 00:34:05.628 --rc geninfo_unexecuted_blocks=1 00:34:05.628 00:34:05.628 ' 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:05.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.628 --rc genhtml_branch_coverage=1 00:34:05.628 --rc genhtml_function_coverage=1 00:34:05.628 --rc genhtml_legend=1 00:34:05.628 --rc geninfo_all_blocks=1 00:34:05.628 --rc geninfo_unexecuted_blocks=1 00:34:05.628 00:34:05.628 ' 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:05.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.628 --rc genhtml_branch_coverage=1 00:34:05.628 --rc genhtml_function_coverage=1 00:34:05.628 --rc genhtml_legend=1 00:34:05.628 --rc geninfo_all_blocks=1 00:34:05.628 --rc geninfo_unexecuted_blocks=1 00:34:05.628 00:34:05.628 ' 00:34:05.628 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:05.628 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:05.628 --rc genhtml_branch_coverage=1 00:34:05.628 --rc genhtml_function_coverage=1 00:34:05.628 --rc genhtml_legend=1 00:34:05.628 --rc geninfo_all_blocks=1 00:34:05.628 --rc geninfo_unexecuted_blocks=1 00:34:05.628 00:34:05.628 ' 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:05.629 [paths/export.sh@2-@6 elided: repeated PATH prepend and export/echo trace lines with identical /opt/golangci/1.54.2/bin, /opt/protoc/21.7/bin, /opt/go/1.21.1/bin prefixes plus the system PATH] 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # : 0 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:05.629 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:34:05.629
05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # 
gather_supported_nvmf_pci_devs 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # xtrace_disable 00:34:05.629 05:34:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # pci_devs=() 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # net_devs=() 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # e810=() 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # local -ga e810 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # x722=() 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # local -ga x722 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # mlx=() 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@322 -- # local -ga mlx 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@325 
-- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@355 -- # 
[[ e810 == e810 ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:12.201 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:12.201 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:12.201 Found net devices under 0000:af:00.0: cvl_0_0 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:12.201 05:34:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:12.201 Found net devices under 0000:af:00.1: cvl_0_1 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@442 -- # is_hw=yes 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:12.201 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:12.202 05:34:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:12.202 05:34:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:12.202 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:12.202 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:34:12.202 00:34:12.202 --- 10.0.0.2 ping statistics --- 00:34:12.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.202 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:12.202 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:12.202 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.185 ms 00:34:12.202 00:34:12.202 --- 10.0.0.1 ping statistics --- 00:34:12.202 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:12.202 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # return 0 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:12.202 05:34:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@509 -- # nvmfpid=501816 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@510 -- # waitforlisten 501816 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501816 ']' 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:12.202 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.202 [2024-12-15 05:34:25.050584] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:34:12.202 [2024-12-15 05:34:25.050626] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:12.202 [2024-12-15 05:34:25.130443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.202 [2024-12-15 05:34:25.151308] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:12.202 [2024-12-15 05:34:25.151343] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:34:12.202 [2024-12-15 05:34:25.151350] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:12.202 [2024-12-15 05:34:25.151356] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:12.202 [2024-12-15 05:34:25.151361] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:34:12.202 [2024-12-15 05:34:25.151838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.202 [2024-12-15 05:34:25.289912] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:12.202 [2024-12-15 05:34:25.298067] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:34:12.202 null0 00:34:12.202 [2024-12-15 05:34:25.330075] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=501867 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 501867 /tmp/host.sock 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 501867 ']' 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:34:12.202 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.202 [2024-12-15 05:34:25.399930] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:34:12.202 [2024-12-15 05:34:25.399974] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid501867 ] 00:34:12.202 [2024-12-15 05:34:25.474275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:12.202 [2024-12-15 05:34:25.497251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.202 05:34:25 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.202 05:34:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.139 [2024-12-15 05:34:26.686144] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:13.139 [2024-12-15 05:34:26.686167] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:13.139 [2024-12-15 05:34:26.686185] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:13.139 [2024-12-15 05:34:26.772434] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:34:13.398 [2024-12-15 05:34:26.948407] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:34:13.398 [2024-12-15 05:34:26.949180] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1195b50:1 started. 
00:34:13.398 [2024-12-15 05:34:26.950483] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:13.398 [2024-12-15 05:34:26.950522] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:13.398 [2024-12-15 05:34:26.950541] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:13.398 [2024-12-15 05:34:26.950553] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:34:13.398 [2024-12-15 05:34:26.950569] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.398 [2024-12-15 05:34:26.955486] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1195b50 was disconnected and freed. delete nvme_qpair. 
00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:34:13.398 05:34:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:34:13.398 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:34:13.657 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:34:13.657 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:13.657 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:13.657 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:13.657 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:13.657 05:34:27 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:13.658 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:13.658 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:13.658 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:13.658 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:13.658 05:34:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:14.595 05:34:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:15.532 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 
-- # get_bdev_list 00:34:15.532 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:15.532 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:15.532 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:15.532 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:15.532 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:15.532 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:15.791 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:15.791 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:15.791 05:34:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:16.727 05:34:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:16.727 05:34:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:17.665 05:34:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.043 [2024-12-15 05:34:32.392032] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:34:19.043 [2024-12-15 05:34:32.392082] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.043 [2024-12-15 05:34:32.392099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.043 [2024-12-15 05:34:32.392112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.043 [2024-12-15 05:34:32.392124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.043 [2024-12-15 05:34:32.392135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.043 [2024-12-15 05:34:32.392146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.043 [2024-12-15 05:34:32.392157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.043 [2024-12-15 05:34:32.392169] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.043 [2024-12-15 05:34:32.392180] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:19.043 [2024-12-15 05:34:32.392190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:19.043 [2024-12-15 05:34:32.392200] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172290 is same with the state(6) to be set 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:19.043 05:34:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:19.043 [2024-12-15 05:34:32.402054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1172290 (9): Bad file descriptor 00:34:19.043 [2024-12-15 05:34:32.412089] bdev_nvme.c:2550:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:34:19.043 [2024-12-15 05:34:32.412103] bdev_nvme.c:2538:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:34:19.043 [2024-12-15 05:34:32.412110] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:19.043 [2024-12-15 05:34:32.412115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:19.043 [2024-12-15 05:34:32.412138] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
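The trace above repeats the same probe once per second: call `get_bdev_list`, compare it against the expected bdev name, and `sleep 1` until they match. A minimal sketch of that polling pair, with the function names and the `rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq | sort | xargs` pipeline taken from the trace itself; the exact internals of the real `discovery_remove_ifc.sh` helpers are an assumption modeled on the logged steps:

```shell
# Sketch of the polling helpers traced above (assumed internals).
get_bdev_list() {
    # Query bdev names over the host app's RPC socket and emit them as
    # one sorted, space-joined line, as the xtrace shows.
    rpc_cmd -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
}

wait_for_bdev() {
    local bdev_list=("$@")
    # Poll once a second until the reported list equals the expected one,
    # e.g. wait_for_bdev '' after the interface goes down.
    while [[ "$(get_bdev_list)" != "${bdev_list[*]}" ]]; do
        sleep 1
    done
}
```

With `wait_for_bdev ''` the loop only exits once the controller teardown has removed every bdev, which is why the trace keeps printing `[[ nvme0n1 != '' ]]` followed by `sleep 1` until the disconnect completes.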
00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:19.981 [2024-12-15 05:34:33.430042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:34:19.981 [2024-12-15 05:34:33.430119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1172290 with addr=10.0.0.2, port=4420 00:34:19.981 [2024-12-15 05:34:33.430154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1172290 is same with the state(6) to be set 00:34:19.981 [2024-12-15 05:34:33.430211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1172290 (9): Bad file descriptor 00:34:19.981 [2024-12-15 05:34:33.431166] bdev_nvme.c:3173:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:34:19.981 [2024-12-15 05:34:33.431230] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:19.981 [2024-12-15 05:34:33.431254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:19.981 [2024-12-15 05:34:33.431278] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:34:19.981 [2024-12-15 05:34:33.431299] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:19.981 [2024-12-15 05:34:33.431316] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:19.981 [2024-12-15 05:34:33.431329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:19.981 [2024-12-15 05:34:33.431352] bdev_nvme.c:2134:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:34:19.981 [2024-12-15 05:34:33.431367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:34:19.981 05:34:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:20.917 [2024-12-15 05:34:34.433880] bdev_nvme.c:2522:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:34:20.917 [2024-12-15 05:34:34.433900] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:34:20.917 [2024-12-15 05:34:34.433912] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:34:20.917 [2024-12-15 05:34:34.433919] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:34:20.917 [2024-12-15 05:34:34.433927] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:34:20.917 [2024-12-15 05:34:34.433934] bdev_nvme.c:2512:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:34:20.917 [2024-12-15 05:34:34.433938] bdev_nvme.c:2279:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:34:20.917 [2024-12-15 05:34:34.433943] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:34:20.917 [2024-12-15 05:34:34.433963] bdev_nvme.c:7267:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:34:20.917 [2024-12-15 05:34:34.433989] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.917 [2024-12-15 05:34:34.434003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.917 [2024-12-15 05:34:34.434023] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.917 [2024-12-15 05:34:34.434030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.917 [2024-12-15 05:34:34.434037] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:34:20.917 [2024-12-15 05:34:34.434044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.917 [2024-12-15 05:34:34.434051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.917 [2024-12-15 05:34:34.434057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.917 [2024-12-15 05:34:34.434065] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:34:20.917 [2024-12-15 05:34:34.434071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:34:20.917 [2024-12-15 05:34:34.434078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:34:20.917 [2024-12-15 05:34:34.434322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11619e0 (9): Bad file descriptor 00:34:20.917 [2024-12-15 05:34:34.435333] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:34:20.917 [2024-12-15 05:34:34.435344] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:34:20.917 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.917 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:20.918 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:34:21.177 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:21.177 05:34:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:22.118 05:34:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:23.056 [2024-12-15 05:34:36.492145] bdev_nvme.c:7516:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:34:23.056 [2024-12-15 05:34:36.492161] bdev_nvme.c:7602:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:34:23.056 [2024-12-15 05:34:36.492175] bdev_nvme.c:7479:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:34:23.056 [2024-12-15 05:34:36.578424] bdev_nvme.c:7445:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:23.056 [2024-12-15 05:34:36.713196] bdev_nvme.c:5663:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:34:23.056 [2024-12-15 05:34:36.713651] bdev_nvme.c:1990:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1174540:1 started. 
00:34:23.056 [2024-12-15 05:34:36.714668] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:34:23.056 [2024-12-15 05:34:36.714698] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:34:23.056 [2024-12-15 05:34:36.714715] bdev_nvme.c:8312:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:34:23.056 [2024-12-15 05:34:36.714727] bdev_nvme.c:7335:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:34:23.056 [2024-12-15 05:34:36.714734] bdev_nvme.c:7294:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:34:23.056 [2024-12-15 05:34:36.720247] bdev_nvme.c:1792:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1174540 was disconnected and freed. delete nvme_qpair. 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:34:23.056 05:34:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:34:24.434 05:34:37 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 501867 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501867 ']' 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501867 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501867 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501867' 00:34:24.434 killing process with pid 501867 00:34:24.434 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501867 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501867 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@516 -- # nvmfcleanup 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # sync 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set +e 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:24.435 05:34:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:24.435 rmmod nvme_tcp 00:34:24.435 rmmod nvme_fabrics 00:34:24.435 rmmod nvme_keyring 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@128 -- # set -e 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # return 0 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@517 -- # '[' -n 501816 ']' 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@518 -- # killprocess 501816 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 501816 ']' 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 501816 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 501816 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:34:24.435 05:34:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 501816' 00:34:24.435 killing process with pid 501816 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 501816 00:34:24.435 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 501816 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # iptr 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-save 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@791 -- # iptables-restore 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:24.694 05:34:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.230 05:34:40 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:27.230 00:34:27.230 real 0m21.440s 00:34:27.230 user 0m26.698s 00:34:27.230 sys 0m5.877s 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:34:27.230 ************************************ 00:34:27.230 END TEST nvmf_discovery_remove_ifc 00:34:27.230 ************************************ 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:27.230 ************************************ 00:34:27.230 START TEST nvmf_identify_kernel_target 00:34:27.230 ************************************ 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:34:27.230 * Looking for test storage... 
00:34:27.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lcov --version 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:27.230 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:34:27.231 05:34:40 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:27.231 05:34:40 
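The trace above is scripts/common.sh's cmp_versions splitting "1.15" and "2" on dots/dashes/colons (IFS=.-:) and comparing the fields numerically, position by position. A standalone sketch of that less-than check (a reconstruction under the traced behavior, not the verbatim SPDK implementation; assumes purely numeric fields):

```shell
#!/usr/bin/env bash
# Field-by-field version less-than, sketching the cmp_versions logic
# traced above. Reconstruction; assumes numeric dot/dash-separated fields.
lt() {
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        # Missing fields compare as 0, so "1" == "1.0".
        local a=${ver1[v]:-0} b=${ver2[v]:-0}
        (( a > b )) && return 1
        (( a < b )) && return 0
    done
    return 1   # equal is not less-than
}

lt 1.15 2 && echo "1.15 < 2"
lt 2.1 2.0 || echo "2.1 >= 2.0"
```

This is why the harness treats lcov 1.15 as older than 2: the first field decides (1 < 2) before the larger second field (15) is ever compared.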
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:27.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.231 --rc genhtml_branch_coverage=1 00:34:27.231 --rc genhtml_function_coverage=1 00:34:27.231 --rc genhtml_legend=1 00:34:27.231 --rc geninfo_all_blocks=1 00:34:27.231 --rc geninfo_unexecuted_blocks=1 00:34:27.231 00:34:27.231 ' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:27.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.231 --rc genhtml_branch_coverage=1 00:34:27.231 --rc genhtml_function_coverage=1 00:34:27.231 --rc genhtml_legend=1 00:34:27.231 --rc geninfo_all_blocks=1 00:34:27.231 --rc geninfo_unexecuted_blocks=1 00:34:27.231 00:34:27.231 ' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:27.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.231 --rc genhtml_branch_coverage=1 00:34:27.231 --rc genhtml_function_coverage=1 00:34:27.231 --rc genhtml_legend=1 00:34:27.231 --rc geninfo_all_blocks=1 00:34:27.231 --rc geninfo_unexecuted_blocks=1 00:34:27.231 00:34:27.231 ' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:27.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:27.231 --rc genhtml_branch_coverage=1 00:34:27.231 --rc genhtml_function_coverage=1 00:34:27.231 --rc genhtml_legend=1 00:34:27.231 --rc geninfo_all_blocks=1 00:34:27.231 --rc geninfo_unexecuted_blocks=1 00:34:27.231 00:34:27.231 ' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 
00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # : 0 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:27.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 
00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:27.231 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:27.232 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:27.232 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:27.232 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:27.232 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # xtrace_disable 00:34:27.232 05:34:40 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # pci_devs=() 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # net_devs=() 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # e810=() 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # local -ga e810 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # x722=() 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # local -ga x722 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # mlx=() 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@322 -- # local -ga mlx 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:32.507 05:34:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:32.507 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.507 05:34:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:32.507 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:32.507 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.508 05:34:46 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:32.508 Found net devices under 0000:af:00.0: cvl_0_0 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:32.508 Found net devices under 0000:af:00.1: cvl_0_1 
00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@442 -- # is_hw=yes 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:32.508 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:32.767 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:34:32.767 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.288 ms 00:34:32.767 00:34:32.767 --- 10.0.0.2 ping statistics --- 00:34:32.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.767 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:32.767 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:32.767 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:34:32.767 00:34:32.767 --- 10.0.0.1 ping statistics --- 00:34:32.767 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:32.767 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # return 0 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:32.767 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:34:33.026 
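The nvmf_tcp_init steps traced above build a two-endpoint topology on one physical NIC: one port (cvl_0_0) is moved into a network namespace to play the target at 10.0.0.2, while the other (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, with an iptables rule opening TCP port 4420 and a ping in each direction as a sanity check. A minimal sketch of the same sequence, assuming root privileges and two spare interfaces with placeholder names (eth_tgt/eth_ini stand in for cvl_0_0/cvl_0_1):

```shell
#!/usr/bin/env bash
# Sketch of the namespace-based test topology the harness sets up above.
# Requires root and two spare interfaces; names and IPs are placeholders
# mirroring the log. Not runnable without that hardware.
set -euo pipefail

NS=tgt_ns
TGT_IF=eth_tgt   # placeholder for cvl_0_0 (target side)
INI_IF=eth_ini   # placeholder for cvl_0_1 (initiator side)

ip netns add "$NS"
ip link set "$TGT_IF" netns "$NS"

# Initiator stays in the root namespace.
ip addr add 10.0.0.1/24 dev "$INI_IF"
ip link set "$INI_IF" up

# Target lives inside the namespace.
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
ip netns exec "$NS" ip link set "$TGT_IF" up
ip netns exec "$NS" ip link set lo up

# Open the NVMe/TCP port on the initiator-facing interface.
iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT

# Sanity checks, mirroring the pings in the log.
ping -c 1 10.0.0.2
ip netns exec "$NS" ping -c 1 10.0.0.1
```

Running target and initiator over real NIC ports in separate namespaces keeps the traffic on the wire (a "phy" run) instead of short-circuiting through loopback, which is the point of this job flavor.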
05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@769 -- # local ip 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@663 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # local block nvme 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:33.026 05:34:46 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:35.564 Waiting for block devices as requested 00:34:35.564 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:35.823 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:35.823 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:36.083 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:36.083 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:36.083 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:36.083 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:36.342 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:36.342 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:36.342 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:36.600 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:36.600 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:36.600 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:36.600 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:36.858 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 
00:34:36.858 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:36.858 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:36.858 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:37.116 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:37.116 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:37.116 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:37.116 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:37.116 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:37.116 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:37.116 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:37.117 No valid GPT data, bailing 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # echo 1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@697 -- # echo 1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@700 -- # echo tcp 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@701 -- # echo 4420 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@702 -- # echo ipv4 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:37.117 00:34:37.117 Discovery Log Number of Records 2, Generation counter 2 00:34:37.117 =====Discovery Log Entry 0====== 00:34:37.117 trtype: tcp 00:34:37.117 adrfam: ipv4 00:34:37.117 subtype: current discovery subsystem 
00:34:37.117 treq: not specified, sq flow control disable supported 00:34:37.117 portid: 1 00:34:37.117 trsvcid: 4420 00:34:37.117 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:37.117 traddr: 10.0.0.1 00:34:37.117 eflags: none 00:34:37.117 sectype: none 00:34:37.117 =====Discovery Log Entry 1====== 00:34:37.117 trtype: tcp 00:34:37.117 adrfam: ipv4 00:34:37.117 subtype: nvme subsystem 00:34:37.117 treq: not specified, sq flow control disable supported 00:34:37.117 portid: 1 00:34:37.117 trsvcid: 4420 00:34:37.117 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:37.117 traddr: 10.0.0.1 00:34:37.117 eflags: none 00:34:37.117 sectype: none 00:34:37.117 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:34:37.117 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:34:37.376 ===================================================== 00:34:37.376 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:34:37.376 ===================================================== 00:34:37.376 Controller Capabilities/Features 00:34:37.376 ================================ 00:34:37.376 Vendor ID: 0000 00:34:37.376 Subsystem Vendor ID: 0000 00:34:37.376 Serial Number: ac50f348aaa585721df8 00:34:37.376 Model Number: Linux 00:34:37.376 Firmware Version: 6.8.9-20 00:34:37.376 Recommended Arb Burst: 0 00:34:37.376 IEEE OUI Identifier: 00 00 00 00:34:37.376 Multi-path I/O 00:34:37.376 May have multiple subsystem ports: No 00:34:37.376 May have multiple controllers: No 00:34:37.376 Associated with SR-IOV VF: No 00:34:37.376 Max Data Transfer Size: Unlimited 00:34:37.376 Max Number of Namespaces: 0 00:34:37.376 Max Number of I/O Queues: 1024 00:34:37.376 NVMe Specification Version (VS): 1.3 00:34:37.376 NVMe Specification Version (Identify): 1.3 00:34:37.376 Maximum Queue Entries: 1024 
00:34:37.376 Contiguous Queues Required: No 00:34:37.376 Arbitration Mechanisms Supported 00:34:37.376 Weighted Round Robin: Not Supported 00:34:37.376 Vendor Specific: Not Supported 00:34:37.376 Reset Timeout: 7500 ms 00:34:37.376 Doorbell Stride: 4 bytes 00:34:37.376 NVM Subsystem Reset: Not Supported 00:34:37.376 Command Sets Supported 00:34:37.376 NVM Command Set: Supported 00:34:37.376 Boot Partition: Not Supported 00:34:37.376 Memory Page Size Minimum: 4096 bytes 00:34:37.376 Memory Page Size Maximum: 4096 bytes 00:34:37.376 Persistent Memory Region: Not Supported 00:34:37.376 Optional Asynchronous Events Supported 00:34:37.376 Namespace Attribute Notices: Not Supported 00:34:37.376 Firmware Activation Notices: Not Supported 00:34:37.376 ANA Change Notices: Not Supported 00:34:37.376 PLE Aggregate Log Change Notices: Not Supported 00:34:37.376 LBA Status Info Alert Notices: Not Supported 00:34:37.376 EGE Aggregate Log Change Notices: Not Supported 00:34:37.376 Normal NVM Subsystem Shutdown event: Not Supported 00:34:37.376 Zone Descriptor Change Notices: Not Supported 00:34:37.376 Discovery Log Change Notices: Supported 00:34:37.376 Controller Attributes 00:34:37.376 128-bit Host Identifier: Not Supported 00:34:37.376 Non-Operational Permissive Mode: Not Supported 00:34:37.376 NVM Sets: Not Supported 00:34:37.376 Read Recovery Levels: Not Supported 00:34:37.376 Endurance Groups: Not Supported 00:34:37.376 Predictable Latency Mode: Not Supported 00:34:37.376 Traffic Based Keep ALive: Not Supported 00:34:37.376 Namespace Granularity: Not Supported 00:34:37.376 SQ Associations: Not Supported 00:34:37.377 UUID List: Not Supported 00:34:37.377 Multi-Domain Subsystem: Not Supported 00:34:37.377 Fixed Capacity Management: Not Supported 00:34:37.377 Variable Capacity Management: Not Supported 00:34:37.377 Delete Endurance Group: Not Supported 00:34:37.377 Delete NVM Set: Not Supported 00:34:37.377 Extended LBA Formats Supported: Not Supported 00:34:37.377 Flexible 
Data Placement Supported: Not Supported 00:34:37.377 00:34:37.377 Controller Memory Buffer Support 00:34:37.377 ================================ 00:34:37.377 Supported: No 00:34:37.377 00:34:37.377 Persistent Memory Region Support 00:34:37.377 ================================ 00:34:37.377 Supported: No 00:34:37.377 00:34:37.377 Admin Command Set Attributes 00:34:37.377 ============================ 00:34:37.377 Security Send/Receive: Not Supported 00:34:37.377 Format NVM: Not Supported 00:34:37.377 Firmware Activate/Download: Not Supported 00:34:37.377 Namespace Management: Not Supported 00:34:37.377 Device Self-Test: Not Supported 00:34:37.377 Directives: Not Supported 00:34:37.377 NVMe-MI: Not Supported 00:34:37.377 Virtualization Management: Not Supported 00:34:37.377 Doorbell Buffer Config: Not Supported 00:34:37.377 Get LBA Status Capability: Not Supported 00:34:37.377 Command & Feature Lockdown Capability: Not Supported 00:34:37.377 Abort Command Limit: 1 00:34:37.377 Async Event Request Limit: 1 00:34:37.377 Number of Firmware Slots: N/A 00:34:37.377 Firmware Slot 1 Read-Only: N/A 00:34:37.377 Firmware Activation Without Reset: N/A 00:34:37.377 Multiple Update Detection Support: N/A 00:34:37.377 Firmware Update Granularity: No Information Provided 00:34:37.377 Per-Namespace SMART Log: No 00:34:37.377 Asymmetric Namespace Access Log Page: Not Supported 00:34:37.377 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:34:37.377 Command Effects Log Page: Not Supported 00:34:37.377 Get Log Page Extended Data: Supported 00:34:37.377 Telemetry Log Pages: Not Supported 00:34:37.377 Persistent Event Log Pages: Not Supported 00:34:37.377 Supported Log Pages Log Page: May Support 00:34:37.377 Commands Supported & Effects Log Page: Not Supported 00:34:37.377 Feature Identifiers & Effects Log Page:May Support 00:34:37.377 NVMe-MI Commands & Effects Log Page: May Support 00:34:37.377 Data Area 4 for Telemetry Log: Not Supported 00:34:37.377 Error Log Page Entries 
Supported: 1 00:34:37.377 Keep Alive: Not Supported 00:34:37.377 00:34:37.377 NVM Command Set Attributes 00:34:37.377 ========================== 00:34:37.377 Submission Queue Entry Size 00:34:37.377 Max: 1 00:34:37.377 Min: 1 00:34:37.377 Completion Queue Entry Size 00:34:37.377 Max: 1 00:34:37.377 Min: 1 00:34:37.377 Number of Namespaces: 0 00:34:37.377 Compare Command: Not Supported 00:34:37.377 Write Uncorrectable Command: Not Supported 00:34:37.377 Dataset Management Command: Not Supported 00:34:37.377 Write Zeroes Command: Not Supported 00:34:37.377 Set Features Save Field: Not Supported 00:34:37.377 Reservations: Not Supported 00:34:37.377 Timestamp: Not Supported 00:34:37.377 Copy: Not Supported 00:34:37.377 Volatile Write Cache: Not Present 00:34:37.377 Atomic Write Unit (Normal): 1 00:34:37.377 Atomic Write Unit (PFail): 1 00:34:37.377 Atomic Compare & Write Unit: 1 00:34:37.377 Fused Compare & Write: Not Supported 00:34:37.377 Scatter-Gather List 00:34:37.377 SGL Command Set: Supported 00:34:37.377 SGL Keyed: Not Supported 00:34:37.377 SGL Bit Bucket Descriptor: Not Supported 00:34:37.377 SGL Metadata Pointer: Not Supported 00:34:37.377 Oversized SGL: Not Supported 00:34:37.377 SGL Metadata Address: Not Supported 00:34:37.377 SGL Offset: Supported 00:34:37.377 Transport SGL Data Block: Not Supported 00:34:37.377 Replay Protected Memory Block: Not Supported 00:34:37.377 00:34:37.377 Firmware Slot Information 00:34:37.377 ========================= 00:34:37.377 Active slot: 0 00:34:37.377 00:34:37.377 00:34:37.377 Error Log 00:34:37.377 ========= 00:34:37.377 00:34:37.377 Active Namespaces 00:34:37.377 ================= 00:34:37.377 Discovery Log Page 00:34:37.377 ================== 00:34:37.377 Generation Counter: 2 00:34:37.377 Number of Records: 2 00:34:37.377 Record Format: 0 00:34:37.377 00:34:37.377 Discovery Log Entry 0 00:34:37.377 ---------------------- 00:34:37.377 Transport Type: 3 (TCP) 00:34:37.377 Address Family: 1 (IPv4) 00:34:37.377 Subsystem 
Type: 3 (Current Discovery Subsystem) 00:34:37.377 Entry Flags: 00:34:37.377 Duplicate Returned Information: 0 00:34:37.377 Explicit Persistent Connection Support for Discovery: 0 00:34:37.377 Transport Requirements: 00:34:37.377 Secure Channel: Not Specified 00:34:37.377 Port ID: 1 (0x0001) 00:34:37.377 Controller ID: 65535 (0xffff) 00:34:37.377 Admin Max SQ Size: 32 00:34:37.377 Transport Service Identifier: 4420 00:34:37.377 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:34:37.377 Transport Address: 10.0.0.1 00:34:37.377 Discovery Log Entry 1 00:34:37.377 ---------------------- 00:34:37.377 Transport Type: 3 (TCP) 00:34:37.377 Address Family: 1 (IPv4) 00:34:37.377 Subsystem Type: 2 (NVM Subsystem) 00:34:37.377 Entry Flags: 00:34:37.377 Duplicate Returned Information: 0 00:34:37.377 Explicit Persistent Connection Support for Discovery: 0 00:34:37.377 Transport Requirements: 00:34:37.377 Secure Channel: Not Specified 00:34:37.377 Port ID: 1 (0x0001) 00:34:37.377 Controller ID: 65535 (0xffff) 00:34:37.377 Admin Max SQ Size: 32 00:34:37.377 Transport Service Identifier: 4420 00:34:37.377 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:34:37.377 Transport Address: 10.0.0.1 00:34:37.377 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:37.377 get_feature(0x01) failed 00:34:37.377 get_feature(0x02) failed 00:34:37.377 get_feature(0x04) failed 00:34:37.377 ===================================================== 00:34:37.377 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:37.377 ===================================================== 00:34:37.377 Controller Capabilities/Features 00:34:37.377 ================================ 00:34:37.377 Vendor ID: 0000 00:34:37.377 Subsystem Vendor ID: 
0000 00:34:37.377 Serial Number: c5788e7f8455c00bcee0 00:34:37.377 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:34:37.377 Firmware Version: 6.8.9-20 00:34:37.377 Recommended Arb Burst: 6 00:34:37.377 IEEE OUI Identifier: 00 00 00 00:34:37.377 Multi-path I/O 00:34:37.377 May have multiple subsystem ports: Yes 00:34:37.377 May have multiple controllers: Yes 00:34:37.377 Associated with SR-IOV VF: No 00:34:37.377 Max Data Transfer Size: Unlimited 00:34:37.377 Max Number of Namespaces: 1024 00:34:37.377 Max Number of I/O Queues: 128 00:34:37.377 NVMe Specification Version (VS): 1.3 00:34:37.377 NVMe Specification Version (Identify): 1.3 00:34:37.377 Maximum Queue Entries: 1024 00:34:37.377 Contiguous Queues Required: No 00:34:37.377 Arbitration Mechanisms Supported 00:34:37.377 Weighted Round Robin: Not Supported 00:34:37.377 Vendor Specific: Not Supported 00:34:37.377 Reset Timeout: 7500 ms 00:34:37.377 Doorbell Stride: 4 bytes 00:34:37.377 NVM Subsystem Reset: Not Supported 00:34:37.377 Command Sets Supported 00:34:37.377 NVM Command Set: Supported 00:34:37.377 Boot Partition: Not Supported 00:34:37.377 Memory Page Size Minimum: 4096 bytes 00:34:37.377 Memory Page Size Maximum: 4096 bytes 00:34:37.377 Persistent Memory Region: Not Supported 00:34:37.377 Optional Asynchronous Events Supported 00:34:37.377 Namespace Attribute Notices: Supported 00:34:37.377 Firmware Activation Notices: Not Supported 00:34:37.377 ANA Change Notices: Supported 00:34:37.377 PLE Aggregate Log Change Notices: Not Supported 00:34:37.377 LBA Status Info Alert Notices: Not Supported 00:34:37.377 EGE Aggregate Log Change Notices: Not Supported 00:34:37.377 Normal NVM Subsystem Shutdown event: Not Supported 00:34:37.377 Zone Descriptor Change Notices: Not Supported 00:34:37.377 Discovery Log Change Notices: Not Supported 00:34:37.377 Controller Attributes 00:34:37.377 128-bit Host Identifier: Supported 00:34:37.377 Non-Operational Permissive Mode: Not Supported 00:34:37.377 NVM Sets: Not 
Supported 00:34:37.377 Read Recovery Levels: Not Supported 00:34:37.377 Endurance Groups: Not Supported 00:34:37.377 Predictable Latency Mode: Not Supported 00:34:37.377 Traffic Based Keep ALive: Supported 00:34:37.377 Namespace Granularity: Not Supported 00:34:37.377 SQ Associations: Not Supported 00:34:37.377 UUID List: Not Supported 00:34:37.377 Multi-Domain Subsystem: Not Supported 00:34:37.377 Fixed Capacity Management: Not Supported 00:34:37.377 Variable Capacity Management: Not Supported 00:34:37.377 Delete Endurance Group: Not Supported 00:34:37.377 Delete NVM Set: Not Supported 00:34:37.377 Extended LBA Formats Supported: Not Supported 00:34:37.377 Flexible Data Placement Supported: Not Supported 00:34:37.377 00:34:37.377 Controller Memory Buffer Support 00:34:37.377 ================================ 00:34:37.377 Supported: No 00:34:37.377 00:34:37.378 Persistent Memory Region Support 00:34:37.378 ================================ 00:34:37.378 Supported: No 00:34:37.378 00:34:37.378 Admin Command Set Attributes 00:34:37.378 ============================ 00:34:37.378 Security Send/Receive: Not Supported 00:34:37.378 Format NVM: Not Supported 00:34:37.378 Firmware Activate/Download: Not Supported 00:34:37.378 Namespace Management: Not Supported 00:34:37.378 Device Self-Test: Not Supported 00:34:37.378 Directives: Not Supported 00:34:37.378 NVMe-MI: Not Supported 00:34:37.378 Virtualization Management: Not Supported 00:34:37.378 Doorbell Buffer Config: Not Supported 00:34:37.378 Get LBA Status Capability: Not Supported 00:34:37.378 Command & Feature Lockdown Capability: Not Supported 00:34:37.378 Abort Command Limit: 4 00:34:37.378 Async Event Request Limit: 4 00:34:37.378 Number of Firmware Slots: N/A 00:34:37.378 Firmware Slot 1 Read-Only: N/A 00:34:37.378 Firmware Activation Without Reset: N/A 00:34:37.378 Multiple Update Detection Support: N/A 00:34:37.378 Firmware Update Granularity: No Information Provided 00:34:37.378 Per-Namespace SMART Log: Yes 
00:34:37.378 Asymmetric Namespace Access Log Page: Supported 00:34:37.378 ANA Transition Time : 10 sec 00:34:37.378 00:34:37.378 Asymmetric Namespace Access Capabilities 00:34:37.378 ANA Optimized State : Supported 00:34:37.378 ANA Non-Optimized State : Supported 00:34:37.378 ANA Inaccessible State : Supported 00:34:37.378 ANA Persistent Loss State : Supported 00:34:37.378 ANA Change State : Supported 00:34:37.378 ANAGRPID is not changed : No 00:34:37.378 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:34:37.378 00:34:37.378 ANA Group Identifier Maximum : 128 00:34:37.378 Number of ANA Group Identifiers : 128 00:34:37.378 Max Number of Allowed Namespaces : 1024 00:34:37.378 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:34:37.378 Command Effects Log Page: Supported 00:34:37.378 Get Log Page Extended Data: Supported 00:34:37.378 Telemetry Log Pages: Not Supported 00:34:37.378 Persistent Event Log Pages: Not Supported 00:34:37.378 Supported Log Pages Log Page: May Support 00:34:37.378 Commands Supported & Effects Log Page: Not Supported 00:34:37.378 Feature Identifiers & Effects Log Page:May Support 00:34:37.378 NVMe-MI Commands & Effects Log Page: May Support 00:34:37.378 Data Area 4 for Telemetry Log: Not Supported 00:34:37.378 Error Log Page Entries Supported: 128 00:34:37.378 Keep Alive: Supported 00:34:37.378 Keep Alive Granularity: 1000 ms 00:34:37.378 00:34:37.378 NVM Command Set Attributes 00:34:37.378 ========================== 00:34:37.378 Submission Queue Entry Size 00:34:37.378 Max: 64 00:34:37.378 Min: 64 00:34:37.378 Completion Queue Entry Size 00:34:37.378 Max: 16 00:34:37.378 Min: 16 00:34:37.378 Number of Namespaces: 1024 00:34:37.378 Compare Command: Not Supported 00:34:37.378 Write Uncorrectable Command: Not Supported 00:34:37.378 Dataset Management Command: Supported 00:34:37.378 Write Zeroes Command: Supported 00:34:37.378 Set Features Save Field: Not Supported 00:34:37.378 Reservations: Not Supported 00:34:37.378 Timestamp: Not Supported 
00:34:37.378 Copy: Not Supported 00:34:37.378 Volatile Write Cache: Present 00:34:37.378 Atomic Write Unit (Normal): 1 00:34:37.378 Atomic Write Unit (PFail): 1 00:34:37.378 Atomic Compare & Write Unit: 1 00:34:37.378 Fused Compare & Write: Not Supported 00:34:37.378 Scatter-Gather List 00:34:37.378 SGL Command Set: Supported 00:34:37.378 SGL Keyed: Not Supported 00:34:37.378 SGL Bit Bucket Descriptor: Not Supported 00:34:37.378 SGL Metadata Pointer: Not Supported 00:34:37.378 Oversized SGL: Not Supported 00:34:37.378 SGL Metadata Address: Not Supported 00:34:37.378 SGL Offset: Supported 00:34:37.378 Transport SGL Data Block: Not Supported 00:34:37.378 Replay Protected Memory Block: Not Supported 00:34:37.378 00:34:37.378 Firmware Slot Information 00:34:37.378 ========================= 00:34:37.378 Active slot: 0 00:34:37.378 00:34:37.378 Asymmetric Namespace Access 00:34:37.378 =========================== 00:34:37.378 Change Count : 0 00:34:37.378 Number of ANA Group Descriptors : 1 00:34:37.378 ANA Group Descriptor : 0 00:34:37.378 ANA Group ID : 1 00:34:37.378 Number of NSID Values : 1 00:34:37.378 Change Count : 0 00:34:37.378 ANA State : 1 00:34:37.378 Namespace Identifier : 1 00:34:37.378 00:34:37.378 Commands Supported and Effects 00:34:37.378 ============================== 00:34:37.378 Admin Commands 00:34:37.378 -------------- 00:34:37.378 Get Log Page (02h): Supported 00:34:37.378 Identify (06h): Supported 00:34:37.378 Abort (08h): Supported 00:34:37.378 Set Features (09h): Supported 00:34:37.378 Get Features (0Ah): Supported 00:34:37.378 Asynchronous Event Request (0Ch): Supported 00:34:37.378 Keep Alive (18h): Supported 00:34:37.378 I/O Commands 00:34:37.378 ------------ 00:34:37.378 Flush (00h): Supported 00:34:37.378 Write (01h): Supported LBA-Change 00:34:37.378 Read (02h): Supported 00:34:37.378 Write Zeroes (08h): Supported LBA-Change 00:34:37.378 Dataset Management (09h): Supported 00:34:37.378 00:34:37.378 Error Log 00:34:37.378 ========= 
00:34:37.378 Entry: 0 00:34:37.378 Error Count: 0x3 00:34:37.378 Submission Queue Id: 0x0 00:34:37.378 Command Id: 0x5 00:34:37.378 Phase Bit: 0 00:34:37.378 Status Code: 0x2 00:34:37.378 Status Code Type: 0x0 00:34:37.378 Do Not Retry: 1 00:34:37.378 Error Location: 0x28 00:34:37.378 LBA: 0x0 00:34:37.378 Namespace: 0x0 00:34:37.378 Vendor Log Page: 0x0 00:34:37.378 ----------- 00:34:37.378 Entry: 1 00:34:37.378 Error Count: 0x2 00:34:37.378 Submission Queue Id: 0x0 00:34:37.378 Command Id: 0x5 00:34:37.378 Phase Bit: 0 00:34:37.378 Status Code: 0x2 00:34:37.378 Status Code Type: 0x0 00:34:37.378 Do Not Retry: 1 00:34:37.378 Error Location: 0x28 00:34:37.378 LBA: 0x0 00:34:37.378 Namespace: 0x0 00:34:37.378 Vendor Log Page: 0x0 00:34:37.378 ----------- 00:34:37.378 Entry: 2 00:34:37.378 Error Count: 0x1 00:34:37.378 Submission Queue Id: 0x0 00:34:37.378 Command Id: 0x4 00:34:37.378 Phase Bit: 0 00:34:37.378 Status Code: 0x2 00:34:37.378 Status Code Type: 0x0 00:34:37.378 Do Not Retry: 1 00:34:37.378 Error Location: 0x28 00:34:37.378 LBA: 0x0 00:34:37.378 Namespace: 0x0 00:34:37.378 Vendor Log Page: 0x0 00:34:37.378 00:34:37.378 Number of Queues 00:34:37.378 ================ 00:34:37.378 Number of I/O Submission Queues: 128 00:34:37.378 Number of I/O Completion Queues: 128 00:34:37.378 00:34:37.378 ZNS Specific Controller Data 00:34:37.378 ============================ 00:34:37.378 Zone Append Size Limit: 0 00:34:37.378 00:34:37.378 00:34:37.378 Active Namespaces 00:34:37.378 ================= 00:34:37.378 get_feature(0x05) failed 00:34:37.378 Namespace ID:1 00:34:37.378 Command Set Identifier: NVM (00h) 00:34:37.378 Deallocate: Supported 00:34:37.378 Deallocated/Unwritten Error: Not Supported 00:34:37.378 Deallocated Read Value: Unknown 00:34:37.378 Deallocate in Write Zeroes: Not Supported 00:34:37.378 Deallocated Guard Field: 0xFFFF 00:34:37.378 Flush: Supported 00:34:37.378 Reservation: Not Supported 00:34:37.378 Namespace Sharing Capabilities: Multiple 
Controllers 00:34:37.378 Size (in LBAs): 1953525168 (931GiB) 00:34:37.378 Capacity (in LBAs): 1953525168 (931GiB) 00:34:37.378 Utilization (in LBAs): 1953525168 (931GiB) 00:34:37.378 UUID: c3fb1f46-98fb-4ecf-a83c-3e1927b3920d 00:34:37.378 Thin Provisioning: Not Supported 00:34:37.378 Per-NS Atomic Units: Yes 00:34:37.378 Atomic Boundary Size (Normal): 0 00:34:37.378 Atomic Boundary Size (PFail): 0 00:34:37.378 Atomic Boundary Offset: 0 00:34:37.378 NGUID/EUI64 Never Reused: No 00:34:37.378 ANA group ID: 1 00:34:37.378 Namespace Write Protected: No 00:34:37.378 Number of LBA Formats: 1 00:34:37.378 Current LBA Format: LBA Format #00 00:34:37.378 LBA Format #00: Data Size: 512 Metadata Size: 0 00:34:37.378 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # sync 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set +e 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:34:37.378 rmmod nvme_tcp 00:34:37.378 rmmod nvme_fabrics 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@128 -- # set -e 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # return 0 00:34:37.378 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@517 -- # '[' -n '' ']' 
00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # iptr 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-save 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@791 -- # iptables-restore 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:37.379 05:34:50 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@714 -- # echo 0 00:34:39.914 05:34:53 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:34:39.914 05:34:53 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:42.447 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:42.447 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 
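The xtrace above (nvmf/common.sh@686-705 for setup, @712-723 for teardown) drives a kernel nvmet target through configfs, but xtrace shows only the `echo` commands, not their redirection targets. The sketch below reconstructs the sequence; the attribute file names (`attr_model`, `attr_allow_any_host`, `device_path`, `enable`, `addr_*`) are assumed from the standard kernel nvmet configfs layout, not copied from the log. It requires root, the `nvmet`/`nvmet_tcp` modules, and a free NVMe block device.

```shell
#!/usr/bin/env bash
# Sketch of the configfs setup/teardown seen in the xtrace above.
# NOTE: the echo destinations are assumptions reconstructed from the
# standard nvmet configfs layout; the log elides them.
set -e
NQN=nqn.2016-06.io.spdk:testnqn
CFG=/sys/kernel/config/nvmet

# --- setup (nvmf/common.sh@686-705) ---
mkdir "$CFG/subsystems/$NQN"
mkdir "$CFG/subsystems/$NQN/namespaces/1"
mkdir "$CFG/ports/1"

echo "SPDK-$NQN"  > "$CFG/subsystems/$NQN/attr_model"           # model string
echo 1            > "$CFG/subsystems/$NQN/attr_allow_any_host"  # no host allowlist
echo /dev/nvme0n1 > "$CFG/subsystems/$NQN/namespaces/1/device_path"
echo 1            > "$CFG/subsystems/$NQN/namespaces/1/enable"
echo 10.0.0.1     > "$CFG/ports/1/addr_traddr"
echo tcp          > "$CFG/ports/1/addr_trtype"
echo 4420         > "$CFG/ports/1/addr_trsvcid"
echo ipv4         > "$CFG/ports/1/addr_adrfam"
# Linking the subsystem under the port exports it, which is what makes
# both discovery-log entries above visible to `nvme discover`:
ln -s "$CFG/subsystems/$NQN" "$CFG/ports/1/subsystems/"

# --- teardown (nvmf/common.sh@712-723) ---
echo 0 > "$CFG/subsystems/$NQN/namespaces/1/enable"
rm -f  "$CFG/ports/1/subsystems/$NQN"
rmdir  "$CFG/subsystems/$NQN/namespaces/1"
rmdir  "$CFG/ports/1"
rmdir  "$CFG/subsystems/$NQN"
modprobe -r nvmet_tcp nvmet
```

With the port linked, `nvme discover -t tcp -a 10.0.0.1 -s 4420` returns the two records shown earlier in the log (the discovery subsystem plus nqn.2016-06.io.spdk:testnqn), and `spdk_nvme_identify` can connect to either subnqn.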
00:34:43.384 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:43.384 00:34:43.384 real 0m16.482s 00:34:43.384 user 0m4.394s 00:34:43.384 sys 0m8.600s 00:34:43.384 05:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:43.385 05:34:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:34:43.385 ************************************ 00:34:43.385 END TEST nvmf_identify_kernel_target 00:34:43.385 ************************************ 00:34:43.385 05:34:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:43.385 05:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:34:43.385 05:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:43.385 05:34:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:34:43.385 ************************************ 00:34:43.385 START TEST nvmf_auth_host 00:34:43.385 ************************************ 00:34:43.385 05:34:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:34:43.385 * Looking for test storage... 
00:34:43.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:34:43.385 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:34:43.385 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lcov --version 00:34:43.385 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:34:43.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.644 --rc genhtml_branch_coverage=1 00:34:43.644 --rc genhtml_function_coverage=1 00:34:43.644 --rc genhtml_legend=1 00:34:43.644 --rc geninfo_all_blocks=1 00:34:43.644 --rc geninfo_unexecuted_blocks=1 00:34:43.644 00:34:43.644 ' 00:34:43.644 05:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:34:43.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.644 --rc genhtml_branch_coverage=1 00:34:43.644 --rc genhtml_function_coverage=1 00:34:43.644 --rc genhtml_legend=1 00:34:43.644 --rc geninfo_all_blocks=1 00:34:43.644 --rc geninfo_unexecuted_blocks=1 00:34:43.644 00:34:43.644 ' 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:34:43.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.644 --rc genhtml_branch_coverage=1 00:34:43.644 --rc genhtml_function_coverage=1 00:34:43.644 --rc genhtml_legend=1 00:34:43.644 --rc geninfo_all_blocks=1 00:34:43.644 --rc geninfo_unexecuted_blocks=1 00:34:43.644 00:34:43.644 ' 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:34:43.644 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:43.644 --rc genhtml_branch_coverage=1 00:34:43.644 --rc genhtml_function_coverage=1 00:34:43.644 --rc genhtml_legend=1 00:34:43.644 --rc geninfo_all_blocks=1 00:34:43.644 --rc geninfo_unexecuted_blocks=1 00:34:43.644 00:34:43.644 ' 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 
00:34:43.644 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.645 05:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # : 0 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:34:43.645 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@55 -- # have_pci_nics=0 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # 
digests=("sha256" "sha384" "sha512") 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # prepare_net_devs 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # local -g is_hw=no 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@440 -- # remove_spdk_ns 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:34:43.645 05:34:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # xtrace_disable 00:34:43.645 05:34:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # pci_devs=() 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # local -a pci_devs 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # pci_net_devs=() 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # pci_drivers=() 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # local -A pci_drivers 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # net_devs=() 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@319 -- # local -ga net_devs 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # e810=() 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # local -ga e810 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # x722=() 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # local -ga x722 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # mlx=() 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@322 -- # local -ga mlx 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:50.210 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:34:50.211 Found 0000:af:00.0 (0x8086 - 0x159b) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:34:50.211 Found 0000:af:00.1 (0x8086 - 0x159b) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 
00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:34:50.211 Found net devices under 0000:af:00.0: cvl_0_0 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # [[ up == up ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:34:50.211 Found net devices under 0000:af:00.1: cvl_0_1 00:34:50.211 05:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@442 -- # is_hw=yes 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:34:50.211 05:35:02 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:34:50.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:50.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.335 ms 00:34:50.211 00:34:50.211 --- 10.0.0.2 ping statistics --- 00:34:50.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.211 rtt min/avg/max/mdev = 0.335/0.335/0.335/0.000 ms 00:34:50.211 05:35:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:34:50.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:34:50.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.202 ms 00:34:50.211 00:34:50.211 --- 10.0.0.1 ping statistics --- 00:34:50.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:50.211 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # return 0 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@509 -- # nvmfpid=513815 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@510 -- # waitforlisten 513815 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513815 ']' 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.211 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.212 05:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=dfe09ef4f5b4103163b69e5bd5e299f2 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.i9D 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key dfe09ef4f5b4103163b69e5bd5e299f2 0 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 dfe09ef4f5b4103163b69e5bd5e299f2 0 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=dfe09ef4f5b4103163b69e5bd5e299f2 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.i9D 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.i9D 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.i9D 
00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=7670aeed9b42a2313c19d0f7636cc6914361469c9eb70cb7241b6e9092f51440 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.SEq 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 7670aeed9b42a2313c19d0f7636cc6914361469c9eb70cb7241b6e9092f51440 3 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 7670aeed9b42a2313c19d0f7636cc6914361469c9eb70cb7241b6e9092f51440 3 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=7670aeed9b42a2313c19d0f7636cc6914361469c9eb70cb7241b6e9092f51440 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@733 -- # python - 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.SEq 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.SEq 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.SEq 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=36f814ba5144185ea32932b0295aa95cdf57c1799e72646d 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.9rn 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 36f814ba5144185ea32932b0295aa95cdf57c1799e72646d 0 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 36f814ba5144185ea32932b0295aa95cdf57c1799e72646d 0 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # 
prefix=DHHC-1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=36f814ba5144185ea32932b0295aa95cdf57c1799e72646d 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.9rn 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.9rn 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.9rn 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=72e1b7ba35e25c9b55f0762cc6ff09f99942c51c8d3abecc 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.vx1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 72e1b7ba35e25c9b55f0762cc6ff09f99942c51c8d3abecc 2 00:34:50.212 05:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 72e1b7ba35e25c9b55f0762cc6ff09f99942c51c8d3abecc 2 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=72e1b7ba35e25c9b55f0762cc6ff09f99942c51c8d3abecc 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.vx1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.vx1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.vx1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=2dfa8a437f173c094618fd2fac9d14c8 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 
00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.qBf 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 2dfa8a437f173c094618fd2fac9d14c8 1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 2dfa8a437f173c094618fd2fac9d14c8 1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=2dfa8a437f173c094618fd2fac9d14c8 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.qBf 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.qBf 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.qBf 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha256 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 
/dev/urandom 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=5b6606d1197a294f415704cedf74eaf4 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha256.XXX 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha256.sXD 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key 5b6606d1197a294f415704cedf74eaf4 1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 5b6606d1197a294f415704cedf74eaf4 1 00:34:50.212 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=5b6606d1197a294f415704cedf74eaf4 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=1 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha256.sXD 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha256.sXD 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.sXD 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.213 05:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha384 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=48 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 24 /dev/urandom 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=bcf4ce57fde665e606673445ed6cfad956487fbe5640c748 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha384.XXX 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha384.kto 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key bcf4ce57fde665e606673445ed6cfad956487fbe5640c748 2 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 bcf4ce57fde665e606673445ed6cfad956487fbe5640c748 2 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=bcf4ce57fde665e606673445ed6cfad956487fbe5640c748 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=2 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha384.kto 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha384.kto 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.kto 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # 
local digest len file key 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=null 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=32 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 16 /dev/urandom 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=e3f7317e149848af0b8aa7e699fcb1f6 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-null.XXX 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-null.kiR 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key e3f7317e149848af0b8aa7e699fcb1f6 0 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 e3f7317e149848af0b8aa7e699fcb1f6 0 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=e3f7317e149848af0b8aa7e699fcb1f6 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=0 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-null.kiR 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-null.kiR 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 
-- # ckeys[3]=/tmp/spdk.key-null.kiR 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@751 -- # local digest len file key 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@752 -- # local -A digests 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # digest=sha512 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@754 -- # len=64 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # xxd -p -c0 -l 32 /dev/urandom 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # key=a2fe58703aa13acef8d1d074aa7279fe6a5998ce72e8397ddcd9687542f3c601 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # mktemp -t spdk.key-sha512.XXX 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@756 -- # file=/tmp/spdk.key-sha512.jFK 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@757 -- # format_dhchap_key a2fe58703aa13acef8d1d074aa7279fe6a5998ce72e8397ddcd9687542f3c601 3 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # format_key DHHC-1 a2fe58703aa13acef8d1d074aa7279fe6a5998ce72e8397ddcd9687542f3c601 3 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # local prefix key digest 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # prefix=DHHC-1 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # key=a2fe58703aa13acef8d1d074aa7279fe6a5998ce72e8397ddcd9687542f3c601 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # digest=3 00:34:50.213 05:35:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@733 -- # python - 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@758 -- # chmod 0600 /tmp/spdk.key-sha512.jFK 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@760 -- # echo /tmp/spdk.key-sha512.jFK 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.jFK 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 513815 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 513815 ']' 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:50.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:50.213 05:35:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.i9D 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.SEq ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.SEq 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.9rn 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.vx1 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vx1 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.qBf 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.sXD ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.sXD 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.kto 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.kiR ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.kiR 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.472 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.jFK 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:50.473 05:35:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # local block nvme 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # [[ ! 
-e /sys/module/nvmet ]] 00:34:50.473 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@670 -- # modprobe nvmet 00:34:50.731 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:50.731 05:35:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:53.275 Waiting for block devices as requested 00:34:53.275 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:53.275 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:53.534 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:53.534 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:53.534 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:53.534 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:53.793 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:53.793 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:53.793 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:54.053 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:54.053 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:54.053 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:54.053 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:54.312 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:54.312 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:54.312 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:54.312 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e 
/sys/block/nvme0n1/queue/zoned ]] 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:55.249 No valid GPT data, bailing 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # echo 1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@697 -- # echo 1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@699 
-- # echo 10.0.0.1 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@700 -- # echo tcp 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@701 -- # echo 4420 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # echo ipv4 00:34:55.249 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:55.250 00:34:55.250 Discovery Log Number of Records 2, Generation counter 2 00:34:55.250 =====Discovery Log Entry 0====== 00:34:55.250 trtype: tcp 00:34:55.250 adrfam: ipv4 00:34:55.250 subtype: current discovery subsystem 00:34:55.250 treq: not specified, sq flow control disable supported 00:34:55.250 portid: 1 00:34:55.250 trsvcid: 4420 00:34:55.250 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:55.250 traddr: 10.0.0.1 00:34:55.250 eflags: none 00:34:55.250 sectype: none 00:34:55.250 =====Discovery Log Entry 1====== 00:34:55.250 trtype: tcp 00:34:55.250 adrfam: ipv4 00:34:55.250 subtype: nvme subsystem 00:34:55.250 treq: not specified, sq flow control disable supported 00:34:55.250 portid: 1 00:34:55.250 trsvcid: 4420 00:34:55.250 subnqn: nqn.2024-02.io.spdk:cnode0 00:34:55.250 traddr: 10.0.0.1 00:34:55.250 eflags: none 00:34:55.250 sectype: none 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.250 05:35:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.509 nvme0n1 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.509 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.510 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.769 nvme0n1 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.769 05:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:55.769 
05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.769 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:55.770 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.029 nvme0n1 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 
-- # set +x 00:34:56.029 nvme0n1 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.029 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.290 nvme0n1 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r 
'.[].name' 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:34:56.290 05:35:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.290 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # 
ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.553 05:35:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.553 nvme0n1 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.553 
05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:56.553 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:34:56.812 
05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:56.812 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:56.813 05:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:56.813 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.072 nvme0n1 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.072 05:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.072 05:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.072 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.332 nvme0n1 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.332 05:35:10 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 
00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.332 05:35:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.592 nvme0n1 00:34:57.592 05:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:34:57.592 05:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # 
get_main_ns_ip 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.592 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.851 nvme0n1 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 
00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:57.851 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:57.852 05:35:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:57.852 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.111 nvme0n1 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.111 05:35:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.679 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # 
ip=NVMF_INITIATOR_IP 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.680 nvme0n1 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.680 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 
00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:58.939 
05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 
-q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:58.939 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.199 nvme0n1 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.199 05:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.199 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.459 nvme0n1 00:34:59.459 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.459 05:35:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.459 05:35:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:34:59.459 
05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.459 05:35:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.459 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.718 nvme0n1 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.719 05:35:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' 
]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:34:59.719 
05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:34:59.719 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.978 nvme0n1 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:34:59.978 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:00.237 05:35:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.615 05:35:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.615 05:35:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.615 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.874 nvme0n1 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:01.874 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:01.875 05:35:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:01.875 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.443 nvme0n1 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.443 05:35:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.443 05:35:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.702 nvme0n1 00:35:02.702 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.702 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:02.702 05:35:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:02.702 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.702 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.702 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.702 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:02.703 05:35:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:02.703 05:35:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:02.703 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.270 nvme0n1 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.270 05:35:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe6144 4 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.270 05:35:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.270 05:35:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.529 nvme0n1 00:35:03.529 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.529 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:03.529 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:03.529 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.529 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.529 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:03.789 05:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:03.789 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.358 nvme0n1 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.358 05:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.358 05:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:04.358 05:35:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.358 05:35:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.927 nvme0n1 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:04.927 05:35:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:04.927 05:35:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.495 nvme0n1 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.495 05:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.495 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:05.754 05:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:05.754 05:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.754 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.323 nvme0n1 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.323 05:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.323 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.324 05:35:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.324 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:06.324 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.324 05:35:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.891 nvme0n1 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:06.892 05:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.892 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.151 nvme0n1 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.151 05:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.151 05:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.151 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:07.152 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.152 05:35:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.411 nvme0n1 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.411 05:35:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.411 05:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.411 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.670 nvme0n1 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.670 05:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:07.670 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.671 05:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.671 05:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.671 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.930 nvme0n1 00:35:07.930 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.930 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.930 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.930 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.930 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:07.931 05:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local 
digest dhgroup keyid ckey 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:07.931 05:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:07.931 nvme0n1 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:07.931 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:08.190 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # keyid=0 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.191 nvme0n1 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.191 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe3072 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:08.450 05:35:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.450 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.451 05:35:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.451 nvme0n1 00:35:08.451 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.451 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.451 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.451 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.451 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.451 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@769 -- # local ip 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.710 nvme0n1 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:08.710 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 
00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.970 nvme0n1 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:08.970 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.230 05:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.230 nvme0n1 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.230 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:09.490 05:35:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.490 05:35:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.749 nvme0n1 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=1 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.749 05:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.749 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.009 nvme0n1 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 
-- # echo 'hmac(sha384)' 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.009 05:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.009 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.269 nvme0n1 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.269 05:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:10.269 05:35:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 
00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.269 05:35:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.528 nvme0n1 00:35:10.528 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.528 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:10.528 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:10.528 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.528 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.528 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.788 05:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:10.788 05:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:10.788 
05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:10.788 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.047 nvme0n1 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.047 05:35:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests 
sha384 --dhchap-dhgroups ffdhe6144 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.047 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.048 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.307 nvme0n1 
00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.307 05:35:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:11.570 05:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.570 
05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.570 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.829 nvme0n1 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:11.829 05:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:11.829 05:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:11.829 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.397 nvme0n1 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:12.397 05:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.397 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.398 05:35:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.398 05:35:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 nvme0n1 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.657 05:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:12.657 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:12.917 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 nvme0n1 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:13.176 05:35:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.176 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.177 05:35:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.744 nvme0n1 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 
0 ]] 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:13.744 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.003 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.003 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:35:14.003 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.003 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.003 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.003 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.004 05:35:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.573 nvme0n1 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:14.573 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.141 nvme0n1 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.142 05:35:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.710 nvme0n1 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:35:15.710 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:15.711 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:16.279 nvme0n1 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.279 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 
00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:16.538 05:35:29 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.538 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.539 05:35:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.539 nvme0n1 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.539 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # get_main_ns_ip 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 nvme0n1 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 
00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:16.799 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 nvme0n1 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 
-- # ip=NVMF_INITIATOR_IP 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.059 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.318 nvme0n1 00:35:17.318 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.318 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.318 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.318 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in 
"${!keys[@]}" 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.319 05:35:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:17.578 nvme0n1 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.578 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:17.579 05:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.579 05:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.579 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.838 nvme0n1 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:17.838 05:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 
00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.838 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.096 nvme0n1 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.096 
05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.096 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.097 05:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.097 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.355 nvme0n1 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.355 05:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.355 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest 
dhgroup keyid ckey 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.356 05:35:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.356 05:35:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.615 nvme0n1 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:35:18.615 05:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.615 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.874 nvme0n1 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.874 
05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:18.874 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:18.875 
05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:18.875 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.134 nvme0n1 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.134 05:35:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 
00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.134 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.393 nvme0n1 00:35:19.393 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.393 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.393 05:35:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.393 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.652 nvme0n1 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.652 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:19.912 05:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.912 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.171 nvme0n1 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.171 05:35:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:35:20.171 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.172 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.431 nvme0n1 00:35:20.431 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.431 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:20.431 
05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:20.431 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.431 05:35:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.431 05:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:20.431 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.000 nvme0n1 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.000 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:21.000 05:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A 
ip_candidates 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.001 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.260 nvme0n1 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 
]] 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.260 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:21.519 
05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.519 05:35:34 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.519 05:35:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.777 nvme0n1 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.777 05:35:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:21.777 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:21.778 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.345 nvme0n1 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.345 05:35:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.345 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.346 05:35:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.604 nvme0n1 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:22.604 
05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.604 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha512)' 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZGZlMDllZjRmNWI0MTAzMTYzYjY5ZTViZDVlMjk5ZjJyKeV0: 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: ]] 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NzY3MGFlZWQ5YjQyYTIzMTNjMTlkMGY3NjM2Y2M2OTE0MzYxNDY5YzllYjcwY2I3MjQxYjZlOTA5MmY1MTQ0MGrjaEs=: 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:22.863 05:35:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:22.863 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.432 nvme0n1 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.432 05:35:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:23.432 05:35:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:23.432 05:35:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:23.432 05:35:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.000 nvme0n1 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.000 05:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:35:24.000 05:35:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.000 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.001 05:35:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.569 nvme0n1 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe8192 3 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:YmNmNGNlNTdmZGU2NjVlNjA2NjczNDQ1ZWQ2Y2ZhZDk1NjQ4N2ZiZTU2NDBjNzQ4FidLGQ==: 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: ]] 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZTNmNzMxN2UxNDk4NDhhZjBiOGFhN2U2OTlmY2IxZja7Ib6z: 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.569 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:35:24.828 05:35:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:24.828 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.396 nvme0n1 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=4 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:YTJmZTU4NzAzYWExM2FjZWY4ZDFkMDc0YWE3Mjc5ZmU2YTU5OThjZTcyZTgzOTdkZGNkOTY4NzU0MmYzYzYwMX5U4FA=: 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:35:25.396 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:35:25.397 
05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.397 05:35:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.966 nvme0n1 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.966 request: 00:35:25.966 { 00:35:25.966 "name": "nvme0", 00:35:25.966 "trtype": "tcp", 00:35:25.966 "traddr": "10.0.0.1", 00:35:25.966 "adrfam": "ipv4", 00:35:25.966 "trsvcid": "4420", 00:35:25.966 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:25.966 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:25.966 "prchk_reftag": false, 00:35:25.966 "prchk_guard": false, 00:35:25.966 "hdgst": false, 00:35:25.966 "ddgst": false, 00:35:25.966 "allow_unrecognized_csi": false, 00:35:25.966 "method": "bdev_nvme_attach_controller", 00:35:25.966 "req_id": 1 00:35:25.966 } 00:35:25.966 Got JSON-RPC error 
response 00:35:25.966 response: 00:35:25.966 { 00:35:25.966 "code": -5, 00:35:25.966 "message": "Input/output error" 00:35:25.966 } 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 
-- # [[ -z tcp ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:25.966 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:25.967 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:35:25.967 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:25.967 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.226 request: 
00:35:26.226 { 00:35:26.226 "name": "nvme0", 00:35:26.226 "trtype": "tcp", 00:35:26.226 "traddr": "10.0.0.1", 00:35:26.226 "adrfam": "ipv4", 00:35:26.226 "trsvcid": "4420", 00:35:26.226 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:26.226 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:26.226 "prchk_reftag": false, 00:35:26.226 "prchk_guard": false, 00:35:26.226 "hdgst": false, 00:35:26.226 "ddgst": false, 00:35:26.226 "dhchap_key": "key2", 00:35:26.226 "allow_unrecognized_csi": false, 00:35:26.226 "method": "bdev_nvme_attach_controller", 00:35:26.226 "req_id": 1 00:35:26.226 } 00:35:26.226 Got JSON-RPC error response 00:35:26.226 response: 00:35:26.226 { 00:35:26.226 "code": -5, 00:35:26.226 "message": "Input/output error" 00:35:26.226 } 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 
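The two rejected `bdev_nvme_attach_controller` calls above return the same JSON-RPC error body, and the surrounding `NOT` wrapper treats that failure as the expected outcome. A minimal, hypothetical sketch (not part of the test suite) of the pass/fail decision being made on that error object:

```python
import json

# Error payload as printed in the log for the failed
# bdev_nvme_attach_controller calls (code -5, i.e. -EIO: the target
# rejected the unauthenticated / wrongly keyed connection).
error_text = """
{
  "code": -5,
  "message": "Input/output error"
}
"""

def rpc_failed(error_json: str) -> bool:
    """Return True when a JSON-RPC error body carries a negative code."""
    err = json.loads(error_json)
    return err.get("code", 0) < 0

# The NOT wrapper in autotest_common.sh expects exactly this outcome:
# the RPC must fail, so a failing attach counts as a passing test step.
print(rpc_failed(error_text))  # True
```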
00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.226 05:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.226 request: 00:35:26.226 { 00:35:26.226 "name": "nvme0", 00:35:26.226 "trtype": "tcp", 00:35:26.226 "traddr": "10.0.0.1", 00:35:26.226 "adrfam": "ipv4", 00:35:26.226 "trsvcid": "4420", 00:35:26.226 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:35:26.226 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:35:26.226 "prchk_reftag": false, 00:35:26.226 "prchk_guard": false, 00:35:26.226 "hdgst": false, 00:35:26.226 "ddgst": false, 00:35:26.226 "dhchap_key": "key1", 00:35:26.226 "dhchap_ctrlr_key": "ckey2", 00:35:26.226 "allow_unrecognized_csi": false, 00:35:26.226 "method": "bdev_nvme_attach_controller", 00:35:26.226 "req_id": 1 00:35:26.226 } 00:35:26.226 Got JSON-RPC error response 00:35:26.226 response: 00:35:26.226 { 00:35:26.226 "code": -5, 00:35:26.226 "message": "Input/output error" 00:35:26.226 } 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # get_main_ns_ip 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:26.226 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.227 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.486 nvme0n1 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:26.486 05:35:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.486 05:35:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:35:26.486 
05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.486 request: 00:35:26.486 { 00:35:26.486 "name": "nvme0", 00:35:26.486 "dhchap_key": "key1", 00:35:26.486 "dhchap_ctrlr_key": "ckey2", 00:35:26.486 "method": "bdev_nvme_set_keys", 00:35:26.486 "req_id": 1 00:35:26.486 } 00:35:26.486 Got JSON-RPC error response 00:35:26.486 response: 
00:35:26.486 { 00:35:26.486 "code": -13, 00:35:26.486 "message": "Permission denied" 00:35:26.486 } 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:26.486 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:26.745 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:26.745 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:26.745 05:35:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:27.682 05:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:27.682 05:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:27.682 05:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:27.682 05:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:27.682 05:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:27.682 05:35:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:35:27.682 05:35:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MzZmODE0YmE1MTQ0MTg1ZWEzMjkzMmIwMjk1YWE5NWNkZjU3YzE3OTllNzI2NDZkJuyGBw==: 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: ]] 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:NzJlMWI3YmEzNWUyNWM5YjU1ZjA3NjJjYzZmZjA5Zjk5OTQyYzUxYzhkM2FiZWNj8uxmXg==: 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # get_main_ns_ip 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@769 -- # local ip 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # ip_candidates=() 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@770 -- # local -A ip_candidates 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.619 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.879 nvme0n1 00:35:28.879 05:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MmRmYThhNDM3ZjE3M2MwOTQ2MThmZDJmYWM5ZDE0YziHcnpL: 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: ]] 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NWI2NjA2ZDExOTdhMjk0ZjQxNTcwNGNlZGY3NGVhZjS6yuF5: 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:28.879 05:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.879 request: 00:35:28.879 { 00:35:28.879 "name": "nvme0", 00:35:28.879 "dhchap_key": "key2", 00:35:28.879 "dhchap_ctrlr_key": "ckey1", 00:35:28.879 "method": "bdev_nvme_set_keys", 00:35:28.879 "req_id": 1 00:35:28.879 } 00:35:28.879 Got JSON-RPC error response 00:35:28.879 response: 00:35:28.879 { 00:35:28.879 "code": -13, 00:35:28.879 "message": "Permission denied" 00:35:28.879 } 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:28.879 05:35:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:28.879 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:29.139 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:35:29.139 05:35:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@516 -- # nvmfcleanup 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # sync 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set +e 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # for i in {1..20} 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:35:30.076 rmmod nvme_tcp 
00:35:30.076 rmmod nvme_fabrics 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@128 -- # set -e 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # return 0 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@517 -- # '[' -n 513815 ']' 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@518 -- # killprocess 513815 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 513815 ']' 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 513815 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 513815 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 513815' 00:35:30.076 killing process with pid 513815 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 513815 00:35:30.076 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 513815 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@524 
-- # nvmf_tcp_fini 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # iptr 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-save 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@791 -- # iptables-restore 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # remove_spdk_ns 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:30.335 05:35:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:32.239 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:35:32.239 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@714 -- # echo 0 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:32.498 05:35:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:35:32.498 05:35:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:35.788 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:35.788 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:36.047 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:36.306 05:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.i9D /tmp/spdk.key-null.9rn /tmp/spdk.key-sha256.qBf /tmp/spdk.key-sha384.kto 
/tmp/spdk.key-sha512.jFK /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:35:36.306 05:35:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:38.844 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:35:38.844 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:35:38.844 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:35:39.102 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:35:39.102 00:35:39.102 real 0m55.727s 00:35:39.102 user 0m50.558s 00:35:39.102 sys 0m12.603s 00:35:39.102 05:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:39.102 05:35:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.102 ************************************ 00:35:39.102 END TEST nvmf_auth_host 00:35:39.102 ************************************ 00:35:39.103 05:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # 
[[ tcp == \t\c\p ]] 00:35:39.103 05:35:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:39.103 05:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:39.103 05:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:39.103 05:35:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:35:39.103 ************************************ 00:35:39.103 START TEST nvmf_digest 00:35:39.103 ************************************ 00:35:39.103 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:35:39.362 * Looking for test storage... 00:35:39.362 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lcov --version 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:35:39.362 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:35:39.363 05:35:52 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # read -ra ver2 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:35:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.363 --rc genhtml_branch_coverage=1 00:35:39.363 --rc genhtml_function_coverage=1 00:35:39.363 --rc genhtml_legend=1 00:35:39.363 --rc geninfo_all_blocks=1 00:35:39.363 --rc geninfo_unexecuted_blocks=1 00:35:39.363 00:35:39.363 ' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:35:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.363 --rc genhtml_branch_coverage=1 00:35:39.363 --rc genhtml_function_coverage=1 00:35:39.363 --rc genhtml_legend=1 00:35:39.363 --rc geninfo_all_blocks=1 00:35:39.363 --rc geninfo_unexecuted_blocks=1 00:35:39.363 00:35:39.363 ' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:35:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.363 --rc genhtml_branch_coverage=1 00:35:39.363 --rc genhtml_function_coverage=1 00:35:39.363 --rc genhtml_legend=1 00:35:39.363 --rc geninfo_all_blocks=1 00:35:39.363 --rc geninfo_unexecuted_blocks=1 00:35:39.363 00:35:39.363 ' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:35:39.363 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:39.363 --rc genhtml_branch_coverage=1 00:35:39.363 --rc genhtml_function_coverage=1 00:35:39.363 --rc genhtml_legend=1 00:35:39.363 --rc geninfo_all_blocks=1 00:35:39.363 --rc geninfo_unexecuted_blocks=1 00:35:39.363 00:35:39.363 ' 00:35:39.363 05:35:52 
nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:39.363 
05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # : 0 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 
00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:35:39.363 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@55 -- # have_pci_nics=0 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@476 -- # prepare_net_devs 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@438 -- # local -g is_hw=no 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@440 -- # remove_spdk_ns 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:35:39.363 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:35:39.363 05:35:52 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:35:39.364 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # xtrace_disable 00:35:39.364 05:35:52 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # pci_devs=() 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # local -a pci_devs 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # pci_net_devs=() 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # pci_drivers=() 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # local -A pci_drivers 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # net_devs=() 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@319 -- # local -ga net_devs 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # e810=() 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # local -ga e810 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # x722=() 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # local -ga x722 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # mlx=() 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@322 -- # local -ga mlx 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:45.937 05:35:58 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- 
# echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:35:45.937 Found 0000:af:00.0 (0x8086 - 0x159b) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:35:45.937 Found 0000:af:00.1 (0x8086 - 0x159b) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:35:45.937 Found net devices under 0000:af:00.0: cvl_0_0 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # [[ up == up ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:35:45.937 Found net devices under 0000:af:00.1: cvl_0_1 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@442 -- # is_hw=yes 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 
00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:35:45.937 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:45.937 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:35:45.937 00:35:45.937 --- 10.0.0.2 ping statistics --- 00:35:45.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.937 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:35:45.937 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:35:45.937 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.144 ms 00:35:45.937 00:35:45.937 --- 10.0.0.1 ping statistics --- 00:35:45.937 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:45.937 rtt min/avg/max/mdev = 0.144/0.144/0.144/0.000 ms 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # return 0 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:45.937 ************************************ 00:35:45.937 START TEST nvmf_digest_clean 00:35:45.937 ************************************ 00:35:45.937 
05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@509 -- # nvmfpid=527634 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@510 -- # waitforlisten 527634 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527634 ']' 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.937 05:35:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.937 05:35:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:45.937 [2024-12-15 05:35:58.863637] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:45.937 [2024-12-15 05:35:58.863686] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:45.937 [2024-12-15 05:35:58.941783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.937 [2024-12-15 05:35:58.963038] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:45.937 [2024-12-15 05:35:58.963073] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:45.937 [2024-12-15 05:35:58.963080] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:45.937 [2024-12-15 05:35:58.963087] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:45.937 [2024-12-15 05:35:58.963092] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:35:45.937 [2024-12-15 05:35:58.963604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 null0 00:35:45.938 [2024-12-15 05:35:59.135054] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.938 [2024-12-15 05:35:59.159244] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=527765 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 527765 /var/tmp/bperf.sock 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 527765 ']' 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:45.938 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:45.938 [2024-12-15 05:35:59.212954] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:45.938 [2024-12-15 05:35:59.212999] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527765 ] 00:35:45.938 [2024-12-15 05:35:59.287451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.938 [2024-12-15 05:35:59.309142] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:45.938 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:46.505 nvme0n1 00:35:46.505 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:46.505 05:35:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:46.505 Running I/O for 2 seconds... 00:35:48.378 25687.00 IOPS, 100.34 MiB/s [2024-12-15T04:36:02.324Z] 25615.00 IOPS, 100.06 MiB/s 00:35:48.637 Latency(us) 00:35:48.637 [2024-12-15T04:36:02.324Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.637 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:48.637 nvme0n1 : 2.00 25641.47 100.16 0.00 0.00 4987.28 2137.72 11546.82 00:35:48.637 [2024-12-15T04:36:02.324Z] =================================================================================================================== 00:35:48.637 [2024-12-15T04:36:02.324Z] Total : 25641.47 100.16 0.00 0.00 4987.28 2137.72 11546.82 00:35:48.637 { 00:35:48.637 "results": [ 00:35:48.637 { 00:35:48.637 "job": "nvme0n1", 00:35:48.637 "core_mask": "0x2", 00:35:48.637 "workload": "randread", 00:35:48.637 "status": "finished", 00:35:48.637 "queue_depth": 128, 00:35:48.637 "io_size": 4096, 00:35:48.637 "runtime": 2.004253, 00:35:48.637 "iops": 25641.47340680044, 00:35:48.637 "mibps": 100.16200549531422, 00:35:48.637 "io_failed": 0, 00:35:48.637 "io_timeout": 0, 00:35:48.637 "avg_latency_us": 4987.283819012037, 00:35:48.637 "min_latency_us": 2137.7219047619046, 00:35:48.637 "max_latency_us": 11546.819047619048 00:35:48.637 } 00:35:48.637 ], 00:35:48.637 "core_count": 1 00:35:48.637 } 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
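The bdevperf JSON summary above can be sanity-checked offline: the reported MiB/s should equal iops × io_size / 2^20. A minimal sketch of that cross-check (the JSON fragment is abridged from the log above; field names follow what bdevperf prints there):

```python
import json

# Abridged results JSON as printed by bdevperf in the log above (one job).
results = json.loads("""
{
  "results": [
    {
      "job": "nvme0n1",
      "workload": "randread",
      "queue_depth": 128,
      "io_size": 4096,
      "runtime": 2.004253,
      "iops": 25641.47340680044,
      "mibps": 100.16200549531422,
      "avg_latency_us": 4987.283819012037
    }
  ],
  "core_count": 1
}
""")

job = results["results"][0]
# Throughput in MiB/s derives from IOPS and the per-I/O size in bytes.
derived_mibps = job["iops"] * job["io_size"] / (1024 * 1024)
print(round(derived_mibps, 2))  # matches the reported 100.16
```

The same identity holds for the 131072-byte run later in the log (5391.38 IOPS × 128 KiB ≈ 673.92 MiB/s).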
get_accel_stats 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:48.637 | select(.opcode=="crc32c") 00:35:48.637 | "\(.module_name) \(.executed)"' 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 527765 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527765 ']' 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527765 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:48.637 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527765 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
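The `jq -rc` filter shown above pulls the crc32c entry out of the `accel_get_stats` RPC output so the test can compare the executing accel module against the expected one (`software`, since DSA is disabled in this run). A rough Python equivalent of that filter, using a hypothetical sample payload shaped like the RPC's output (only the fields the jq expression touches are sketched):

```python
# Hypothetical accel_get_stats payload; only .operations[].opcode,
# .module_name and .executed are modeled here.
stats = {
    "operations": [
        {"opcode": "copy", "module_name": "software", "executed": 12},
        {"opcode": "crc32c", "module_name": "software", "executed": 51282},
    ]
}

# Equivalent of: jq -rc '.operations[] | select(.opcode=="crc32c")
#                        | "\(.module_name) \(.executed)"'
for op in stats["operations"]:
    if op["opcode"] == "crc32c":
        acc_module, acc_executed = op["module_name"], op["executed"]
        print(acc_module, acc_executed)

# Mirrors the shell checks that follow in the log:
# (( acc_executed > 0 )) and [[ $acc_module == software ]]
assert acc_executed > 0 and acc_module == "software"
```

The digest test passes only when crc32c was actually executed and the module matches the expected backend for the configuration under test.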
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527765' 00:35:48.896 killing process with pid 527765 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527765 00:35:48.896 Received shutdown signal, test time was about 2.000000 seconds 00:35:48.896 00:35:48.896 Latency(us) 00:35:48.896 [2024-12-15T04:36:02.583Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:48.896 [2024-12-15T04:36:02.583Z] =================================================================================================================== 00:35:48.896 [2024-12-15T04:36:02.583Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527765 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528352 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 528352 /var/tmp/bperf.sock 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528352 ']' 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:48.896 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:48.896 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:48.896 [2024-12-15 05:36:02.535275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:48.896 [2024-12-15 05:36:02.535322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528352 ] 00:35:48.896 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:48.896 Zero copy mechanism will not be used. 
00:35:49.156 [2024-12-15 05:36:02.610378] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:49.156 [2024-12-15 05:36:02.632645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:49.156 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:49.156 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:49.156 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:49.156 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:49.156 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:49.414 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.415 05:36:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:49.673 nvme0n1 00:35:49.673 05:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:49.673 05:36:03 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:49.673 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:49.673 Zero copy mechanism will not be used. 00:35:49.673 Running I/O for 2 seconds... 
00:35:52.000 4867.00 IOPS, 608.38 MiB/s [2024-12-15T04:36:05.687Z] 5391.50 IOPS, 673.94 MiB/s 00:35:52.000 Latency(us) 00:35:52.000 [2024-12-15T04:36:05.687Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.000 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:35:52.000 nvme0n1 : 2.00 5391.38 673.92 0.00 0.00 2965.29 667.06 8550.89 00:35:52.000 [2024-12-15T04:36:05.687Z] =================================================================================================================== 00:35:52.000 [2024-12-15T04:36:05.687Z] Total : 5391.38 673.92 0.00 0.00 2965.29 667.06 8550.89 00:35:52.000 { 00:35:52.000 "results": [ 00:35:52.000 { 00:35:52.000 "job": "nvme0n1", 00:35:52.000 "core_mask": "0x2", 00:35:52.000 "workload": "randread", 00:35:52.000 "status": "finished", 00:35:52.000 "queue_depth": 16, 00:35:52.000 "io_size": 131072, 00:35:52.000 "runtime": 2.003012, 00:35:52.000 "iops": 5391.380580845247, 00:35:52.000 "mibps": 673.9225726056559, 00:35:52.000 "io_failed": 0, 00:35:52.000 "io_timeout": 0, 00:35:52.000 "avg_latency_us": 2965.2938347906993, 00:35:52.000 "min_latency_us": 667.0628571428572, 00:35:52.000 "max_latency_us": 8550.887619047619 00:35:52.000 } 00:35:52.000 ], 00:35:52.000 "core_count": 1 00:35:52.000 } 00:35:52.000 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:52.000 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:52.000 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:52.000 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:52.000 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:35:52.000 | select(.opcode=="crc32c") 00:35:52.000 | "\(.module_name) \(.executed)"' 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528352 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528352 ']' 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528352 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528352 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528352' 00:35:52.001 killing process with pid 528352 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528352 00:35:52.001 Received shutdown signal, test time was about 2.000000 seconds 00:35:52.001 00:35:52.001 
Latency(us) 00:35:52.001 [2024-12-15T04:36:05.688Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:52.001 [2024-12-15T04:36:05.688Z] =================================================================================================================== 00:35:52.001 [2024-12-15T04:36:05.688Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:52.001 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528352 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=528817 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 528817 /var/tmp/bperf.sock 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 528817 ']' 00:35:52.260 05:36:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:52.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:52.260 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:52.260 [2024-12-15 05:36:05.793181] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:52.261 [2024-12-15 05:36:05.793228] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid528817 ] 00:35:52.261 [2024-12-15 05:36:05.866519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:52.261 [2024-12-15 05:36:05.888839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:52.261 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:52.261 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:52.261 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:52.261 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:52.520 05:36:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:52.520 05:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:52.520 05:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:52.779 nvme0n1 00:35:52.779 05:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:52.779 05:36:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:53.038 Running I/O for 2 seconds... 
00:35:54.911 28615.00 IOPS, 111.78 MiB/s [2024-12-15T04:36:08.598Z] 28723.00 IOPS, 112.20 MiB/s 00:35:54.911 Latency(us) 00:35:54.911 [2024-12-15T04:36:08.598Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.911 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:35:54.911 nvme0n1 : 2.00 28744.37 112.28 0.00 0.00 4448.06 1786.64 7521.04 00:35:54.911 [2024-12-15T04:36:08.598Z] =================================================================================================================== 00:35:54.911 [2024-12-15T04:36:08.598Z] Total : 28744.37 112.28 0.00 0.00 4448.06 1786.64 7521.04 00:35:54.911 { 00:35:54.911 "results": [ 00:35:54.911 { 00:35:54.911 "job": "nvme0n1", 00:35:54.911 "core_mask": "0x2", 00:35:54.911 "workload": "randwrite", 00:35:54.911 "status": "finished", 00:35:54.911 "queue_depth": 128, 00:35:54.911 "io_size": 4096, 00:35:54.911 "runtime": 2.002966, 00:35:54.911 "iops": 28744.372096181363, 00:35:54.911 "mibps": 112.28270350070845, 00:35:54.911 "io_failed": 0, 00:35:54.911 "io_timeout": 0, 00:35:54.911 "avg_latency_us": 4448.059631149643, 00:35:54.911 "min_latency_us": 1786.6361904761904, 00:35:54.911 "max_latency_us": 7521.03619047619 00:35:54.911 } 00:35:54.911 ], 00:35:54.911 "core_count": 1 00:35:54.911 } 00:35:54.911 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:54.911 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:54.911 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:54.911 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:54.911 | select(.opcode=="crc32c") 00:35:54.911 | "\(.module_name) \(.executed)"' 00:35:54.911 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 528817 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 528817 ']' 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 528817 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 528817 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 528817' 00:35:55.170 killing process with pid 528817 00:35:55.170 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 528817 00:35:55.170 Received shutdown signal, test time was about 2.000000 seconds 00:35:55.170 
00:35:55.170 Latency(us) 00:35:55.170 [2024-12-15T04:36:08.858Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:55.171 [2024-12-15T04:36:08.858Z] =================================================================================================================== 00:35:55.171 [2024-12-15T04:36:08.858Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:55.171 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 528817 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=529873 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 529873 /var/tmp/bperf.sock 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 529873 ']' 00:35:55.429 05:36:08 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:55.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:55.429 05:36:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:55.429 [2024-12-15 05:36:09.019804] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:55.429 [2024-12-15 05:36:09.019852] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid529873 ] 00:35:55.429 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:55.429 Zero copy mechanism will not be used. 
00:35:55.429 [2024-12-15 05:36:09.094854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:55.688 [2024-12-15 05:36:09.117235] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.688 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.688 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:35:55.688 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:35:55.688 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:35:55.688 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:55.946 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:55.946 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:56.205 nvme0n1 00:35:56.205 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:35:56.205 05:36:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:56.205 I/O size of 131072 is greater than zero copy threshold (65536). 00:35:56.205 Zero copy mechanism will not be used. 00:35:56.205 Running I/O for 2 seconds... 
00:35:58.520 6376.00 IOPS, 797.00 MiB/s [2024-12-15T04:36:12.207Z] 6594.00 IOPS, 824.25 MiB/s 00:35:58.520 Latency(us) 00:35:58.520 [2024-12-15T04:36:12.207Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.520 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:35:58.520 nvme0n1 : 2.00 6593.09 824.14 0.00 0.00 2422.62 1591.59 6272.73 00:35:58.520 [2024-12-15T04:36:12.207Z] =================================================================================================================== 00:35:58.520 [2024-12-15T04:36:12.208Z] Total : 6593.09 824.14 0.00 0.00 2422.62 1591.59 6272.73 00:35:58.521 { 00:35:58.521 "results": [ 00:35:58.521 { 00:35:58.521 "job": "nvme0n1", 00:35:58.521 "core_mask": "0x2", 00:35:58.521 "workload": "randwrite", 00:35:58.521 "status": "finished", 00:35:58.521 "queue_depth": 16, 00:35:58.521 "io_size": 131072, 00:35:58.521 "runtime": 2.00346, 00:35:58.521 "iops": 6593.093947470876, 00:35:58.521 "mibps": 824.1367434338595, 00:35:58.521 "io_failed": 0, 00:35:58.521 "io_timeout": 0, 00:35:58.521 "avg_latency_us": 2422.624324973233, 00:35:58.521 "min_latency_us": 1591.5885714285714, 00:35:58.521 "max_latency_us": 6272.731428571428 00:35:58.521 } 00:35:58.521 ], 00:35:58.521 "core_count": 1 00:35:58.521 } 00:35:58.521 05:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:35:58.521 05:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:35:58.521 05:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:35:58.521 05:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:35:58.521 | select(.opcode=="crc32c") 00:35:58.521 | "\(.module_name) \(.executed)"' 00:35:58.521 05:36:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 529873 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 529873 ']' 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 529873 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 529873 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 529873' 00:35:58.521 killing process with pid 529873 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 529873 00:35:58.521 Received shutdown signal, test time was about 2.000000 seconds 00:35:58.521 
00:35:58.521 Latency(us) 00:35:58.521 [2024-12-15T04:36:12.208Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:58.521 [2024-12-15T04:36:12.208Z] =================================================================================================================== 00:35:58.521 [2024-12-15T04:36:12.208Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:58.521 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 529873 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 527634 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 527634 ']' 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 527634 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 527634 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 527634' 00:35:58.780 killing process with pid 527634 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 527634 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 527634 00:35:58.780 00:35:58.780 real 0m13.642s 
00:35:58.780 user 0m26.044s 00:35:58.780 sys 0m4.576s 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:58.780 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:35:58.780 ************************************ 00:35:58.780 END TEST nvmf_digest_clean 00:35:58.780 ************************************ 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:35:59.040 ************************************ 00:35:59.040 START TEST nvmf_digest_error 00:35:59.040 ************************************ 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@509 -- # nvmfpid=530374 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@510 -- # waitforlisten 530374 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530374 ']' 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:59.040 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.040 [2024-12-15 05:36:12.580869] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:59.040 [2024-12-15 05:36:12.580907] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:59.040 [2024-12-15 05:36:12.654773] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.040 [2024-12-15 05:36:12.675719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:59.040 [2024-12-15 05:36:12.675754] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:35:59.040 [2024-12-15 05:36:12.675761] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:59.040 [2024-12-15 05:36:12.675767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:59.040 [2024-12-15 05:36:12.675772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:35:59.040 [2024-12-15 05:36:12.676285] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.040 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.299 [2024-12-15 05:36:12.764771] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.299 05:36:12 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.299 null0 00:35:59.299 [2024-12-15 05:36:12.854876] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:59.299 [2024-12-15 05:36:12.879078] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=530400 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 530400 /var/tmp/bperf.sock 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 530400 ']' 
00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:59.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:59.299 05:36:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.299 [2024-12-15 05:36:12.932847] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:35:59.300 [2024-12-15 05:36:12.932889] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid530400 ] 00:35:59.559 [2024-12-15 05:36:13.008320] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:59.559 [2024-12-15 05:36:13.030820] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:35:59.559 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:59.559 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:35:59.559 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:59.559 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:35:59.818 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:35:59.818 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:59.818 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:35:59.818 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:59.818 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:35:59.818 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:00.077 nvme0n1 00:36:00.077 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:00.077 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:00.077 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:00.077 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:00.077 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:00.077 05:36:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:00.337 Running I/O for 2 seconds... 00:36:00.337 [2024-12-15 05:36:13.853824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.853859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:20710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.853869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.862527] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.862555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.862564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.874755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.874777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.874786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.885596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.885617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:18448 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.885626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.895905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.895925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:22251 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.895933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.904750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.904770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:3987 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.904779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.914736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.914756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:21979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.914764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.924121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.924141] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:6805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.924149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.932510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.932530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:15831 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.932538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.942180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.942199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:15755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.942207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.953289] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.953309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:13587 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.953317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.964644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.964663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:3549 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.964671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.973299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.973318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:25583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.973327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.984801] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.984821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:1040 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.984832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:13.996826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:13.996846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:13.996855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:14.005646] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:14.005667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:14.005675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.337 [2024-12-15 05:36:14.017127] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.337 [2024-12-15 05:36:14.017147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.337 [2024-12-15 05:36:14.017155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.030040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.030061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:15842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.030070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.042621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.042642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11451 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.042651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.053774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.053793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.053801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.062481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.062501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.062509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.074357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.074377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.074385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.087089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.087112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.087120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.096768] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.096786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:23911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.096793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.105457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.105476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:8335 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.105484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.115482] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.115502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.115510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.125594] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.125616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:23141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 
05:36:14.125624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.133744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.133764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18820 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.133773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.143710] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.143731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.143738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.153088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.153107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:10694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.153116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.162160] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.162179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:6532 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.162187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.173413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.173433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.173441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.180960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.180979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:9263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.180987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.190643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.190663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.190670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.202197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.202217] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:77 nsid:1 lba:20544 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.202225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.210935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.210954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:18805 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.210962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.221066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.221085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.221093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.233950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.233971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:14488 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.233979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.244924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 
05:36:14.244944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:9206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.244952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.254061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.254086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:1964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.254094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.262643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.262662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:6711 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.262670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.272123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.272143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:21657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.272151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.598 [2024-12-15 05:36:14.280330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x1dfd990) 00:36:00.598 [2024-12-15 05:36:14.280350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:20341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.598 [2024-12-15 05:36:14.280358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.290514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.290535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:13620 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.290544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.299924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.299943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:5118 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.299951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.309546] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.309567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.309575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.318087] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.318106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.318114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.327185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.327205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:2199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.327212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.337231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.337253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:23316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.337261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.348556] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.348577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.348585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:00.858 [2024-12-15 05:36:14.358258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.858 [2024-12-15 05:36:14.358278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:20529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.858 [2024-12-15 05:36:14.358286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.859 [2024-12-15 05:36:14.366502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.859 [2024-12-15 05:36:14.366522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:24910 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.859 [2024-12-15 05:36:14.366530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.859 [2024-12-15 05:36:14.377429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.859 [2024-12-15 05:36:14.377450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.859 [2024-12-15 05:36:14.377457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.859 [2024-12-15 05:36:14.389199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.859 [2024-12-15 05:36:14.389218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:19107 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.859 [2024-12-15 05:36:14.389226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.859 [2024-12-15 05:36:14.399461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.859 [2024-12-15 05:36:14.399480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:21561 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.859 [2024-12-15 05:36:14.399488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.859 [2024-12-15 05:36:14.409562] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.859 [2024-12-15 05:36:14.409582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:2498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.859 [2024-12-15 05:36:14.409590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.859 [2024-12-15 05:36:14.418689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.859 [2024-12-15 05:36:14.418708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:11824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.859 [2024-12-15 05:36:14.418719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:00.859 [2024-12-15 05:36:14.427615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:00.859 [2024-12-15 05:36:14.427635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:6477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:00.859 [2024-12-15 05:36:14.427643] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.438949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.438970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.438978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.450856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.450876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:6478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.450884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.463035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.463055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:15570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.463063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.474193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.474211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:5492 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.474220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.482106] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.482125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.482132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.494277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.494296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.494304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.505221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.505240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4600 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.505248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.513638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.513661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:18715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.513669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.526077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.526099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.526108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:00.859 [2024-12-15 05:36:14.536942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:00.859 [2024-12-15 05:36:14.536964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:00.859 [2024-12-15 05:36:14.536972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.119 [2024-12-15 05:36:14.547364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.119 [2024-12-15 05:36:14.547385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:22536 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.119 [2024-12-15 05:36:14.547393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.119 [2024-12-15 05:36:14.555932] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.555953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:11828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.555962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.565567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.565586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:4897 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.565594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.575408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.575428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:17419 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.575435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.584290] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.584309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:17368 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.584317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.593245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.593265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:157 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.593273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.606149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.606169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3655 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.606176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.616005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.616025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:2610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.616033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.625324] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.625344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:4781 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.625352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.634607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.634626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:5164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.634634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.643534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.643555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:5407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.643563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.652203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.652223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:5183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.652231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.661890] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.661910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:4156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.661918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.672467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.672487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.672494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.681759] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.681779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.681790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.690632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.690652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:1326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.690660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.698604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.698624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:18603 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.698632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.708312] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.708332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11690 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.708339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.718102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.718122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:4618 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.718130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.728051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.728071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:12004 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.728079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.737873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.737892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.737900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.747513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.747532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:21741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.747540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.757307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.757326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:16073 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.757334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.765550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.765569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:2111 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.765577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.775454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.775474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.775482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.784202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.784221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:17401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.120 [2024-12-15 05:36:14.784229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.120 [2024-12-15 05:36:14.794126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.120 [2024-12-15 05:36:14.794145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.121 [2024-12-15 05:36:14.794153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.121 [2024-12-15 05:36:14.803664] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.121 [2024-12-15 05:36:14.803683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:16134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.121 [2024-12-15 05:36:14.803691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.812199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.812219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9535 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.812227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.821690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.821709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:12369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.821717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 25201.00 IOPS, 98.44 MiB/s [2024-12-15T04:36:15.067Z] [2024-12-15 05:36:14.835805] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.835825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24651 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.835833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.843736] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.843755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:16883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.843767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.855665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.855686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:3429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.855694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.867732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.867752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:14181 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.867760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.879133] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.879153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:4697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.879161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.888560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.888579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:7856 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.888587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.899859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.899880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:23991 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.899888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.910421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.910447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:7321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.910456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.918828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.918847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:3207 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.918855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.929119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.929139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20438 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.929147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.941230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.941253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:22265 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.941261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.953729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.953748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:15164 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.380 [2024-12-15 05:36:14.953756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.380 [2024-12-15 05:36:14.965331] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.380 [2024-12-15 05:36:14.965350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:11283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:14.965359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:14.974135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:14.974154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:24303 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:14.974162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:14.986842] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:14.986861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5218 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:14.986870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:14.994694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:14.994713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:15504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:14.994721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:15.006344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:15.006363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15275 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:15.006371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:15.018468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:15.018488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:12916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:15.018497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:15.027982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:15.028005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:19864 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:15.028013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:15.036180] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:15.036200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:14792 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:15.036208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:15.047570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:15.047589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:7531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:15.047598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.381 [2024-12-15 05:36:15.056179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.381 [2024-12-15 05:36:15.056198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.381 [2024-12-15 05:36:15.056206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.068036] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.068056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:4270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.068064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.077089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.077108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.077116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.088096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.088116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:23778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.088124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.099068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.099087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:1367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.099095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.107720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.107739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:8235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.107747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.120548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.120571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.120579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.128829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.128848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.128856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.140587] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.140607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:5209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.140615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.153010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.153030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:22789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.153039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.165090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.165111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10686 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.165119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.177096] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.177116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:15958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.177125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.188284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.188302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:5901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.188311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.200254] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.200274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:17570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.200283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.210860] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.640 [2024-12-15 05:36:15.210878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17200 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.640 [2024-12-15 05:36:15.210886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:01.640 [2024-12-15 05:36:15.219251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990)
00:36:01.641 [2024-12-15 05:36:15.219270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:22689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:36:01.641 [2024-12-15 05:36:15.219278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*:
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.231426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.231445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.231452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.243355] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.243376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:4821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.243384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.255829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.255850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:6027 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.255858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.266605] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.266625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 
05:36:15.266633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.279631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.279651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:10530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.279659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.290637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.290656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10213 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.290664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.299946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.299966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.299973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.311370] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.311390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2263 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.311402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.641 [2024-12-15 05:36:15.319618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.641 [2024-12-15 05:36:15.319638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.641 [2024-12-15 05:36:15.319645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.331850] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.331870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:19081 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.331878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.342651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.342670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20025 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.342678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.350704] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.350723] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22958 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.350731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.361101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.361120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:1057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.361128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.373774] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.373793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:13795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.373801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.385299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.385319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:2557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.385326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.395881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.395900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:20826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.395908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.404136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.404159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:14206 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.404167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.414980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.415004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:8076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.415012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.424910] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.424930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:913 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.424940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.433112] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.433132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:9960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.433140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.444487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.444506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:5014 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.444513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.455154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.455173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:22919 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.455181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.467244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.467264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:12680 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.467272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.480026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.480046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:215 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.480054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.490974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.490997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:13619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.491006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.500282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.500301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15435 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.500309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.512235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.512254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:6241 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.512262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.521285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.521304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:17169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.521311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.530185] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.530204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13838 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.530212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.540867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.540887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9405 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.540895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.552354] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.552374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:3216 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.552382] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.560602] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.560621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.560629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.572193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.572212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:19153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.572220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:01.901 [2024-12-15 05:36:15.583634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:01.901 [2024-12-15 05:36:15.583654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:01.901 [2024-12-15 05:36:15.583668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.160 [2024-12-15 05:36:15.592302] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.160 [2024-12-15 05:36:15.592322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:12406 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:02.160 [2024-12-15 05:36:15.592330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.160 [2024-12-15 05:36:15.608415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.160 [2024-12-15 05:36:15.608435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:853 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.160 [2024-12-15 05:36:15.608443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.160 [2024-12-15 05:36:15.617210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.160 [2024-12-15 05:36:15.617229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:12564 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.160 [2024-12-15 05:36:15.617237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.160 [2024-12-15 05:36:15.629483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.160 [2024-12-15 05:36:15.629503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12401 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.160 [2024-12-15 05:36:15.629511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.160 [2024-12-15 05:36:15.638155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.160 [2024-12-15 05:36:15.638174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:22 nsid:1 lba:4706 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.160 [2024-12-15 05:36:15.638182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.160 [2024-12-15 05:36:15.650141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.160 [2024-12-15 05:36:15.650161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.650169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.661674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.661693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:10594 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.661701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.673828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.673847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.673855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.682506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 
05:36:15.682526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:7832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.682534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.695181] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.695201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:6113 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.695209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.706909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.706929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:6453 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.706937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.719022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.719042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.719050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.730694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.730715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:3402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.730723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.738518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.738538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:217 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.738562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.749183] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.749204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:23797 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.749212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.759215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.759235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24104 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.759243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.767349] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.767370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:15511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.767381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.778702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.778723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:7134 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.778731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.788464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.788483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.788491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.798548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.798567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.798575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.806980] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.807006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12946 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.807014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.819266] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.819286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:14065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.819294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 [2024-12-15 05:36:15.830409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.830430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:19604 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.830437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:02.161 24546.00 IOPS, 95.88 MiB/s [2024-12-15T04:36:15.848Z] [2024-12-15 05:36:15.838683] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1dfd990) 00:36:02.161 [2024-12-15 05:36:15.838704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:4623 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:02.161 [2024-12-15 05:36:15.838712] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:36:02.161 
00:36:02.161 Latency(us)
00:36:02.161 [2024-12-15T04:36:15.848Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:02.161 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096)
00:36:02.161 nvme0n1 : 2.00 24561.25 95.94 0.00 0.00 5206.25 2668.25 18849.40
00:36:02.161 [2024-12-15T04:36:15.848Z] ===================================================================================================================
00:36:02.161 [2024-12-15T04:36:15.848Z] Total : 24561.25 95.94 0.00 0.00 5206.25 2668.25 18849.40
00:36:02.161 {
00:36:02.161 "results": [
00:36:02.161 {
00:36:02.161 "job": "nvme0n1",
00:36:02.161 "core_mask": "0x2",
00:36:02.161 "workload": "randread",
00:36:02.161 "status": "finished",
00:36:02.161 "queue_depth": 128,
00:36:02.161 "io_size": 4096,
00:36:02.161 "runtime": 2.00397,
00:36:02.161 "iops": 24561.245926835232,
00:36:02.161 "mibps": 95.94236690170013,
00:36:02.161 "io_failed": 0,
00:36:02.161 "io_timeout": 0,
00:36:02.161 "avg_latency_us": 5206.245602987558,
00:36:02.161 "min_latency_us": 2668.2514285714287,
00:36:02.161 "max_latency_us": 18849.401904761904
00:36:02.161 }
00:36:02.161 ],
00:36:02.161 "core_count": 1
00:36:02.161 }
00:36:02.420 05:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:36:02.420 05:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:36:02.420 05:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:36:02.420 | .driver_specific
00:36:02.420 | .nvme_error
00:36:02.420 | .status_code
00:36:02.420 | .command_transient_transport_error'
00:36:02.420 05:36:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:36:02.420 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 ))
00:36:02.420 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 530400
00:36:02.420 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530400 ']'
00:36:02.420 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530400
00:36:02.420 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
00:36:02.420 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:02.420 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530400
00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530400'
killing process with pid 530400
00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530400
Received shutdown signal, test time was about 2.000000 seconds
00:36:02.680 
00:36:02.680 Latency(us)
00:36:02.680 [2024-12-15T04:36:16.367Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:02.680 [2024-12-15T04:36:16.367Z] ===================================================================================================================
00:36:02.680 [2024-12-15T04:36:16.367Z] Total : 0.00 0.00 
0.00 0.00 0.00 0.00 0.00 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530400 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=531068 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 531068 /var/tmp/bperf.sock 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 531068 ']' 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:02.680 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
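The `get_transient_errcount` helper seen above pipes `bdev_get_iostat` output through jq to pull `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` (the `(( 193 > 0 ))` check). A minimal Python sketch of the same extraction; the field path mirrors the jq filter in the log, but the sample payload and its values are hypothetical, not a real SPDK response:

```python
import json

# Hypothetical, abbreviated bdev_get_iostat response; only the fields
# the jq filter in the log actually touches are included here.
sample = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 193
          }
        }
      }
    }
  ]
}
""")

def get_transient_errcount(iostat: dict) -> int:
    # Mirrors: jq -r '.bdevs[0] | .driver_specific | .nvme_error
    #                 | .status_code | .command_transient_transport_error'
    return (iostat["bdevs"][0]
                  ["driver_specific"]
                  ["nvme_error"]
                  ["status_code"]
                  ["command_transient_transport_error"])

count = get_transient_errcount(sample)
print(count)
```

The test then passes when this counter is greater than zero, i.e. when the injected digest corruption actually produced transient transport errors.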
00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:02.680 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:02.680 [2024-12-15 05:36:16.307748] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:02.680 [2024-12-15 05:36:16.307796] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531068 ] 00:36:02.680 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:02.680 Zero copy mechanism will not be used. 00:36:02.939 [2024-12-15 05:36:16.378844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:02.939 [2024-12-15 05:36:16.398145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:02.939 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:02.939 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:02.939 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:02.939 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:03.199 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:03.199 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.199 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@10 -- # set +x 00:36:03.199 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.199 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:03.199 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:03.458 nvme0n1 00:36:03.458 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:03.458 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:03.458 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:03.458 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:03.458 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:03.458 05:36:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:03.458 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:03.458 Zero copy mechanism will not be used. 00:36:03.458 Running I/O for 2 seconds... 
00:36:03.458 [2024-12-15 05:36:17.063253] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.458 [2024-12-15 05:36:17.063293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.458 [2024-12-15 05:36:17.063304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.458 [2024-12-15 05:36:17.068389] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.458 [2024-12-15 05:36:17.068413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.458 [2024-12-15 05:36:17.068421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.458 [2024-12-15 05:36:17.073469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.458 [2024-12-15 05:36:17.073491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.458 [2024-12-15 05:36:17.073499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.458 [2024-12-15 05:36:17.078661] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.458 [2024-12-15 05:36:17.078682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.458 [2024-12-15 05:36:17.078690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.458 [2024-12-15 05:36:17.083756] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.458 [2024-12-15 05:36:17.083777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.458 [2024-12-15 05:36:17.083784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.458 [2024-12-15 05:36:17.088880] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.458 [2024-12-15 05:36:17.088900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.458 [2024-12-15 05:36:17.088908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.458 [2024-12-15 05:36:17.094025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.094045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.094053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.099147] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.099167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.099175] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.104256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.104277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.104289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.109413] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.109433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.109442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.114576] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.114596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.114605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.119732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.119753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.119760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.124870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.124892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.124900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.130039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.130060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.130067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.135218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.135238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.135246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.459 [2024-12-15 05:36:17.140425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.459 [2024-12-15 05:36:17.140447] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.459 [2024-12-15 05:36:17.140455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.145676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.145699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.145707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.150845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.150871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.150879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.156062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.156083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.156090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.161156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 
05:36:17.161178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.161185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.167065] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.167087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:13184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.167095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.172934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.172955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.172963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.178625] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.178647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.178655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.183813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.183836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.183844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.188919] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.188940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.188948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.194045] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.194067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.194075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.199245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.199266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.199274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.204534] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.204555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.204563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.209692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.209713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.209721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.214870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.214892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.214900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.220000] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.220032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.220040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:005c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.225110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.225131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.225139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.230172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.230193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.230201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.235297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.235319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.235326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.240435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.240455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.240469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.720 [2024-12-15 05:36:17.245630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.720 [2024-12-15 05:36:17.245650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:22496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.720 [2024-12-15 05:36:17.245658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.250780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.250801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.250809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.255949] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.255970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.255978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.261104] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.261125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 
05:36:17.261133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.266207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.266228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.266236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.271272] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.271293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.271300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.276378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.276398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.276405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.281446] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.281466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1920 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.281474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.286543] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.286567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.286574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.291630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.291651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.291659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.296729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.296750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.296758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.301878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.301899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.301907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.307072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.307092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.307100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.312172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.312192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.312200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.317301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.317320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.317328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.322489] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.322510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.322518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.327600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.327620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.327631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.332735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.332756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.332763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.337882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.337903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.337911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.343104] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.343123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.343131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.348249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.348270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.348277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.353369] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.353389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.353397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.358491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.358510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.358518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.363631] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.363652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.363659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.368698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.368718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.368727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.373881] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.373905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.373913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.378491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.378513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.721 [2024-12-15 05:36:17.378521] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.721 [2024-12-15 05:36:17.383408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.721 [2024-12-15 05:36:17.383430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.722 [2024-12-15 05:36:17.383438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.722 [2024-12-15 05:36:17.388353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.722 [2024-12-15 05:36:17.388373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.722 [2024-12-15 05:36:17.388381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.722 [2024-12-15 05:36:17.393262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.722 [2024-12-15 05:36:17.393283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.722 [2024-12-15 05:36:17.393291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.722 [2024-12-15 05:36:17.398187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.722 [2024-12-15 05:36:17.398207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.722 [2024-12-15 
05:36:17.398215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.722 [2024-12-15 05:36:17.403294] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.722 [2024-12-15 05:36:17.403315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.722 [2024-12-15 05:36:17.403322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.408521] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.408542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.408550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.413705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.413726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.413734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.418877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.418897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11648 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.418905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.423982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.424010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.424017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.429087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.429108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.429116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.434267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.434287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.434295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.439418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.439439] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.439446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.444612] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.444633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.444641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.449792] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.449812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.449820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.454900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.454920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.454927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.459999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.460019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.460030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.465103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.465124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.465132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.470221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.470242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.470250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.475346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.475367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.475375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.480485] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.480505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.480513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.485604] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.485625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.485633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.490697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.490718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.490726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.495879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.495901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.495908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:001c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.500961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.500982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.500991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.506068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.506093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.506100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.511248] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.511269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.983 [2024-12-15 05:36:17.511277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.983 [2024-12-15 05:36:17.516434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.983 [2024-12-15 05:36:17.516454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.516462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.521549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.521568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.521576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.526630] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.526651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.526658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.531688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.531708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.531716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.536829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.536850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 
05:36:17.536857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.542075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.542095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.542103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.547268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.547289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.547297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.552398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.552418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.552426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.557624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.557645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.557653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.562793] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.562813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.562821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.567999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.568021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.568029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.573151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.573171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.573179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.578223] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.578242] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.578250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.581025] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.581045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.581052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.586094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.586113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.586120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.591231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.591264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.591272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.596439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 
05:36:17.596459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.596466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.601561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.601580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.601588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.606613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.606633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.606641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.611676] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.611694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.611702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.616817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.616836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.616844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.621892] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.621911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.621918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.627028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.627048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.627056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.632148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.632167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.632175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.637238] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.637257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.637264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.641988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.642015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.984 [2024-12-15 05:36:17.642023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.984 [2024-12-15 05:36:17.646969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.984 [2024-12-15 05:36:17.646990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.985 [2024-12-15 05:36:17.647003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:03.985 [2024-12-15 05:36:17.651947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.985 [2024-12-15 05:36:17.651967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.985 [2024-12-15 05:36:17.651974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:36:03.985 [2024-12-15 05:36:17.656903] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.985 [2024-12-15 05:36:17.656923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.985 [2024-12-15 05:36:17.656932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:03.985 [2024-12-15 05:36:17.661829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.985 [2024-12-15 05:36:17.661849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.985 [2024-12-15 05:36:17.661857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:03.985 [2024-12-15 05:36:17.666887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:03.985 [2024-12-15 05:36:17.666907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:03.985 [2024-12-15 05:36:17.666915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.671891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.671913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.671921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.677877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.677898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.677910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.683699] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.683719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.683727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.690411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.690431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.690439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.697415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.697436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 
05:36:17.697445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.704766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.704787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.704795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.711647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.711669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.711677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.718814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.718836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.718844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.726085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.726106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24768 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.726113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.733295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.733316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.733324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.741003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.741028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.741037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.246 [2024-12-15 05:36:17.747754] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.246 [2024-12-15 05:36:17.747777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.246 [2024-12-15 05:36:17.747786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.753540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.753562] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.753570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.759694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.759715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.759723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.765843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.765864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.765872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.771942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.771964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.771972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.777937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.777959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.777967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.783877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.783898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.783906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.790499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.790520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.790528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.798230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.798251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.798259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.805587] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.805609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.805617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.812971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.812999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.813008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.820240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.820262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.820270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.826634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.826656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.826664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:005c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.832816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.832837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.832845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.838810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.838831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.838839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.844116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.844137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.844145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.849371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.849392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.849403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.854825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.854846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.854853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.860111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.860131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.860139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.865649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.865670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.865678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.871148] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.871168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 
05:36:17.871176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.876461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.876481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.876489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.881739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.881759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.881767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.887024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.887043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.887051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.892245] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.892266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21824 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.892273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.897548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.897569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.897576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.902789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.902809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.902817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.908154] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.908174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.247 [2024-12-15 05:36:17.908184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.247 [2024-12-15 05:36:17.913490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.247 [2024-12-15 05:36:17.913511] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.248 [2024-12-15 05:36:17.913518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.248 [2024-12-15 05:36:17.918833] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.248 [2024-12-15 05:36:17.918854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.248 [2024-12-15 05:36:17.918861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.248 [2024-12-15 05:36:17.924275] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.248 [2024-12-15 05:36:17.924305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.248 [2024-12-15 05:36:17.924313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.248 [2024-12-15 05:36:17.929577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.248 [2024-12-15 05:36:17.929598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.248 [2024-12-15 05:36:17.929605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.934866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.934887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.934895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.940224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.940244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.940268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.945663] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.945683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.945691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.950964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.950985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.950998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.956206] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.956227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.956235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.961457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.961478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.961486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.966796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.966816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.966824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.972153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.972174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.972182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.977572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.977593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.977601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.982978] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.983004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.983012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.988557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.988581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.508 [2024-12-15 05:36:17.988589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.508 [2024-12-15 05:36:17.994013] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.508 [2024-12-15 05:36:17.994034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:17.994042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:17.999278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:17.999299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:17.999306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.004622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.004643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.004651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.010145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.010166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.010174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.015473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.015493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 
05:36:18.015501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.020796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.020816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.020824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.026122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.026143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.026150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.031359] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.031379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.031387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.036730] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.036751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19488 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.036758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.042129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.042150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.042158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.047778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.047799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.047807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.053466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.053487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.053496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.509 5750.00 IOPS, 718.75 MiB/s [2024-12-15T04:36:18.196Z] [2024-12-15 05:36:18.060197] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 
05:36:18.060218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.060226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.065611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.065632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.065639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.070789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.070809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.070817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.076059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.076079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.076087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.081340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.081361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.081372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.086610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.086630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.086638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.091865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.091885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.091893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.097256] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.097277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.097284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.102558] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.102579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.102586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.107961] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.107981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.107988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.113534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.113555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.113563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.118997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.119017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.119025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c 
p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.124415] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.124435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.124443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.129819] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.129840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.129848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.135012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.135032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.135040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.140311] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.509 [2024-12-15 05:36:18.140331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.509 [2024-12-15 05:36:18.140339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.509 [2024-12-15 05:36:18.145629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.145649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.145657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.150858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.150880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.150889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.156237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.156258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.156266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.161746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.161767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.161775] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.167540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.167561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.167569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.172874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.172894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.172906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.178208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.178229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.178237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.183488] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.183509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.183517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.510 [2024-12-15 05:36:18.188684] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.510 [2024-12-15 05:36:18.188705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.510 [2024-12-15 05:36:18.188713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.193964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.193986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.194001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.199255] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.199277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.199286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.204494] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.204514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.204523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.209709] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.209730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.209738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.214889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.214910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.214917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.220110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.220133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.220141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.225407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.225428] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.225435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.230840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.230861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.230869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.236270] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.236291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.236298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.241680] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.241703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.241711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.246852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.246874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.246883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.252001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.252020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.252028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.257174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.257195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.257202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.262295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.262316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.262323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.267654] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.267674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.267682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.272825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.272845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.272852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.278105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.278125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.278133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.283416] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.283438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.283446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:007c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.288769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.771 [2024-12-15 05:36:18.288789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.771 [2024-12-15 05:36:18.288797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.771 [2024-12-15 05:36:18.294052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.294072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.294079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.299408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.299429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.299437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.304816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.304835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.304843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.310184] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.310204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.310216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.315506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.315526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.315534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.320937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.320957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.320965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.326327] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.326348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 
05:36:18.326355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.331817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.331838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.331845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.337330] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.337351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.337358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.342750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.342771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.342779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.348196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.348216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25376 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.348224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.353610] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.353631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.353639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.359026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.359049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.359057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.364463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.364483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.364491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.369728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.369748] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.369756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.375167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.375187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.375195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.380555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.380576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.380583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.385809] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.385830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.385837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.391135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.391157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.391164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.396513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.396534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.396542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.401918] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.401939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.401946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.407589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.407610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.407618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.412968] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.412988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.413001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.418555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.418575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.418583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.423867] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.423887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.423895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.429136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.429156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.429164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:005c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.434394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.434414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.434422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.439784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.772 [2024-12-15 05:36:18.439805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.772 [2024-12-15 05:36:18.439813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:04.772 [2024-12-15 05:36:18.445575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.773 [2024-12-15 05:36:18.445596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.773 [2024-12-15 05:36:18.445603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:04.773 [2024-12-15 05:36:18.450788] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:04.773 [2024-12-15 05:36:18.450812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:04.773 [2024-12-15 05:36:18.450820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.456473] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.456511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.456521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.462149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.462171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.462179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.467722] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.467743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.467751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.473145] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.473165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 
05:36:18.473173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.478588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.478609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.478616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.484109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.484130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.484138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.489097] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.489118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.489126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.494321] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.494342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19168 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.494349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.499503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.499524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.499532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.504743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.504764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.504773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.510339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.510359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.510367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.515818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.515839] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.515847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.521081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.521102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.521110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.526584] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.526604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.526612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.532213] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.532234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.532242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.537761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 
05:36:18.537782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.537790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.543143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.543165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.543176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.548099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.548127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.548135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.553336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.553356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.553364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.558619] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.558639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.558647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.563908] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.033 [2024-12-15 05:36:18.563928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.033 [2024-12-15 05:36:18.563936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.033 [2024-12-15 05:36:18.570014] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.570035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.570043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.576084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.576106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.576114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.581796] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.581818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.581826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.587395] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.587417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.587425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.593335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.593361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.593369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.598817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.598840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.598848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:003c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.604274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.604296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.604305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.609555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.609579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.609587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.616870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.616893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.616901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.624948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.624970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.624978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.631885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.631907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.631915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.637523] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.637545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.637553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.642931] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.642953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.642960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.648464] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.648487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 
05:36:18.648495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.653681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.653702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.653709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.659022] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.659043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.659051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.664258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.664279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.664287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.667047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.667068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7616 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.667076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.672010] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.672031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.672039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.677537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.677558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.677566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.683169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.683190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.683198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.688405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.688427] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.688438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.693548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.693569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.693576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.698984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.699011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.699019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.704694] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 05:36:18.704714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:05.034 [2024-12-15 05:36:18.704722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:05.034 [2024-12-15 05:36:18.709826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50) 00:36:05.034 [2024-12-15 
[2024-12-15 05:36:18.709847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-15 05:36:18.709855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:003c p:0 m:0 dnr:0
[2024-12-15 05:36:18.715019] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x196dc50)
[2024-12-15 05:36:18.715041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
[2024-12-15 05:36:18.715049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0
[... repeated entries omitted: nvme_tcp.c:1365 *ERROR* data digest error on tqpair=(0x196dc50), each followed by a READ command / COMMAND TRANSIENT TRANSPORT ERROR (00/22) completion pair on qid:1 (cids 1, 4, 5, 10-13, 15; varying LBAs), from 05:36:18.720 through 05:36:19.059 ...]
5783.00 IOPS, 722.88 MiB/s
                                                                  Latency(us)
[2024-12-15T04:36:19.244Z] Device Information      : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
[2024-12-15T04:36:19.244Z] Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
	nvme0n1                 :       2.00    5782.41     722.80       0.00      0.00    2764.43     631.95    7957.94
[2024-12-15T04:36:19.244Z] ===================================================================================================================
[2024-12-15T04:36:19.244Z] Total                   :               5782.41     722.80       0.00      0.00    2764.43     631.95    7957.94
{
  "results": [
    {
      "job": "nvme0n1",
      "core_mask": "0x2",
      "workload": "randread",
      "status": "finished",
      "queue_depth": 16,
      "io_size": 131072,
      "runtime": 2.002972,
      "iops": 5782.407342688764,
      "mibps": 722.8009178360956,
      "io_failed": 0,
      "io_timeout": 0,
      "avg_latency_us": 2764.4300124166402,
      "min_latency_us": 631.9542857142857,
      "max_latency_us": 7957.942857142857
    }
  ],
  "core_count": 1
}
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 373 > 0 ))
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 531068
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 531068 ']'
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 531068
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531068
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531068'
killing process with pid 531068
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 531068
Received shutdown signal, test time was about 2.000000 seconds
                                                                  Latency(us)
[2024-12-15T04:36:19.503Z] Device Information      : runtime(s)       IOPS      MiB/s     Fail/s      TO/s    Average        min        max
[2024-12-15T04:36:19.503Z] ===================================================================================================================
[2024-12-15T04:36:19.503Z] Total                   :       0.00       0.00       0.00       0.00       0.00       0.00       0.00
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 531068
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=531532
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 531532 /var/tmp/bperf.sock
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 531532 ']'
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable
05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
[2024-12-15 05:36:19.517566] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:06.076 [2024-12-15 05:36:19.517612] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid531532 ] 00:36:06.076 [2024-12-15 05:36:19.573690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:06.076 [2024-12-15 05:36:19.596477] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:06.076 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:06.076 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:06.076 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:06.076 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:06.335 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:06.335 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.335 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:06.335 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.335 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.335 05:36:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:06.904 nvme0n1 00:36:06.904 05:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:36:06.904 05:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:06.904 05:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:06.904 05:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:06.904 05:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:06.904 05:36:20 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:06.904 Running I/O for 2 seconds... 
00:36:06.904 [2024-12-15 05:36:20.440667] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eef270 00:36:06.904 [2024-12-15 05:36:20.441622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.904 [2024-12-15 05:36:20.441650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:06.904 [2024-12-15 05:36:20.450418] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef57b0 00:36:06.904 [2024-12-15 05:36:20.451313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:22901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.904 [2024-12-15 05:36:20.451336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:06.904 [2024-12-15 05:36:20.460096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ede470 00:36:06.904 [2024-12-15 05:36:20.461120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:10733 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.904 [2024-12-15 05:36:20.461140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:06.904 [2024-12-15 05:36:20.468605] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eee190 00:36:06.904 [2024-12-15 05:36:20.469511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.904 [2024-12-15 05:36:20.469529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:112 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:36:06.904 [2024-12-15 05:36:20.480001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef8a50 00:36:06.904 [2024-12-15 05:36:20.481294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21996 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.904 [2024-12-15 05:36:20.481313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:06.904 [2024-12-15 05:36:20.487732] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eea248 00:36:06.904 [2024-12-15 05:36:20.488515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:10756 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.904 [2024-12-15 05:36:20.488533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:06.904 [2024-12-15 05:36:20.497121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7538 00:36:06.904 [2024-12-15 05:36:20.497909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:3959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.904 [2024-12-15 05:36:20.497927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:06.904 [2024-12-15 05:36:20.506653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1f80 00:36:06.904 [2024-12-15 05:36:20.507296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:17499 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.507314] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.516399] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef2d80 00:36:06.905 [2024-12-15 05:36:20.517088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:7981 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.517107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.525927] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf118 00:36:06.905 [2024-12-15 05:36:20.526935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:16693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.526954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.534830] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef4b08 00:36:06.905 [2024-12-15 05:36:20.535779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:11389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.535798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.544485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef5be8 00:36:06.905 [2024-12-15 05:36:20.545518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:16551 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.545536] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.554206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eea248 00:36:06.905 [2024-12-15 05:36:20.555386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.555404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.563934] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee3d08 00:36:06.905 [2024-12-15 05:36:20.565249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21983 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.565266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.572620] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efc998 00:36:06.905 [2024-12-15 05:36:20.573511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:21361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:06.905 [2024-12-15 05:36:20.573529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:06.905 [2024-12-15 05:36:20.581987] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef81e0 00:36:06.905 [2024-12-15 05:36:20.582696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:21786 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:06.905 [2024-12-15 05:36:20.582715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.591224] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef20d8 00:36:07.165 [2024-12-15 05:36:20.592191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:21510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.592210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.600602] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef9f68 00:36:07.165 [2024-12-15 05:36:20.601433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:3211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.601452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.610324] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee3060 00:36:07.165 [2024-12-15 05:36:20.611384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:7672 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.611402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.619662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee4140 00:36:07.165 [2024-12-15 05:36:20.620848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 
lba:1107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.620866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.629072] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1f80 00:36:07.165 [2024-12-15 05:36:20.630225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:9829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.630243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.638160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeaef0 00:36:07.165 [2024-12-15 05:36:20.638870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:24943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.638888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.647682] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef2d80 00:36:07.165 [2024-12-15 05:36:20.648744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:23267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.648767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.656943] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef6458 00:36:07.165 [2024-12-15 05:36:20.658014] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.658033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.665619] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efef90 00:36:07.165 [2024-12-15 05:36:20.666874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:15079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.666893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.673606] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee84c0 00:36:07.165 [2024-12-15 05:36:20.674259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:6769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.674278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.683935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee73e0 00:36:07.165 [2024-12-15 05:36:20.684742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.684761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.693281] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef6cc8 
00:36:07.165 [2024-12-15 05:36:20.694095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:16648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.694114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.702599] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef0ff8 00:36:07.165 [2024-12-15 05:36:20.703397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.703416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.711869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeff18 00:36:07.165 [2024-12-15 05:36:20.712670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.712689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.721189] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef5378 00:36:07.165 [2024-12-15 05:36:20.722020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:15659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.722038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.165 [2024-12-15 05:36:20.730503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23810e0) with pdu=0x200016ee1710 00:36:07.165 [2024-12-15 05:36:20.731331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:24882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.165 [2024-12-15 05:36:20.731350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.739836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee27f0 00:36:07.166 [2024-12-15 05:36:20.740652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:7660 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.740683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.749156] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef8618 00:36:07.166 [2024-12-15 05:36:20.749967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.749985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.758453] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef31b8 00:36:07.166 [2024-12-15 05:36:20.759267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.759285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.767773] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef20d8 00:36:07.166 [2024-12-15 05:36:20.768595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:11648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.768613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.777267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ede038 00:36:07.166 [2024-12-15 05:36:20.777835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.777853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.786556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee49b0 00:36:07.166 [2024-12-15 05:36:20.787483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:22677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.787501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.795671] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf118 00:36:07.166 [2024-12-15 05:36:20.796559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:162 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.796577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 
dnr:0 00:36:07.166 [2024-12-15 05:36:20.804716] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7da8 00:36:07.166 [2024-12-15 05:36:20.805637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:24067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.805654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.814015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee8088 00:36:07.166 [2024-12-15 05:36:20.814791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:4656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.814809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.823311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ede470 00:36:07.166 [2024-12-15 05:36:20.824339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15587 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.824359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.831526] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5220 00:36:07.166 [2024-12-15 05:36:20.832856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:21887 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.832875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:20 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.840043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7970 00:36:07.166 [2024-12-15 05:36:20.840688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:10105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.840707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.166 [2024-12-15 05:36:20.849243] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6fa8 00:36:07.166 [2024-12-15 05:36:20.849936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:23541 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.166 [2024-12-15 05:36:20.849955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.858455] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5ec8 00:36:07.426 [2024-12-15 05:36:20.859094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:24102 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.859113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.867503] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef6020 00:36:07.426 [2024-12-15 05:36:20.868169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:6823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.868188] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.876557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efc560 00:36:07.426 [2024-12-15 05:36:20.877230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.877249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.885918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efd640 00:36:07.426 [2024-12-15 05:36:20.886541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:23504 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.886560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.895290] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ede038 00:36:07.426 [2024-12-15 05:36:20.896055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2661 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.896073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.904285] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eddc00 00:36:07.426 [2024-12-15 05:36:20.905150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:15165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.905168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.914328] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeb760 00:36:07.426 [2024-12-15 05:36:20.915343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:25518 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.915361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.923725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6300 00:36:07.426 [2024-12-15 05:36:20.924846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:19901 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.924864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.931125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee73e0 00:36:07.426 [2024-12-15 05:36:20.931750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:14168 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.931768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.940464] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef4298 00:36:07.426 [2024-12-15 05:36:20.940893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:7060 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:07.426 [2024-12-15 05:36:20.940912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.949745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eee5c8 00:36:07.426 [2024-12-15 05:36:20.950528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:23900 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.426 [2024-12-15 05:36:20.950546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.426 [2024-12-15 05:36:20.959015] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef9f68 00:36:07.427 [2024-12-15 05:36:20.959789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:14824 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:20.959806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:20.968484] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee0ea0 00:36:07.427 [2024-12-15 05:36:20.969079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:17512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:20.969101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:20.978912] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5a90 00:36:07.427 [2024-12-15 05:36:20.980284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 
nsid:1 lba:9009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:20.980302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:20.987339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5220 00:36:07.427 [2024-12-15 05:36:20.988376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:20.988395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:20.996346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee3060 00:36:07.427 [2024-12-15 05:36:20.997381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:18092 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:20.997399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.005380] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6b70 00:36:07.427 [2024-12-15 05:36:21.006385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:16555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.006403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.014494] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efcdd0 00:36:07.427 [2024-12-15 05:36:21.015502] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:6854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.015520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.023587] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6738 00:36:07.427 [2024-12-15 05:36:21.024597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:25435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.024615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.032616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee7818 00:36:07.427 [2024-12-15 05:36:21.033642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:22163 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.033659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.041693] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee99d8 00:36:07.427 [2024-12-15 05:36:21.042702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.042720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.050043] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efc128 00:36:07.427 
[2024-12-15 05:36:21.051382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:9783 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.051399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.058598] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eef270 00:36:07.427 [2024-12-15 05:36:21.059224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:4285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.059242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.067076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1b48 00:36:07.427 [2024-12-15 05:36:21.067688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:3675 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.067706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.077114] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef96f8 00:36:07.427 [2024-12-15 05:36:21.077862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11673 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.077880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.086176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23810e0) with pdu=0x200016eecc78 00:36:07.427 [2024-12-15 05:36:21.086941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:3737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.086959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.095267] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf118 00:36:07.427 [2024-12-15 05:36:21.096062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:20365 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.096080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:07.427 [2024-12-15 05:36:21.104575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed920 00:36:07.427 [2024-12-15 05:36:21.105448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:24302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.427 [2024-12-15 05:36:21.105466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:36:07.687 [2024-12-15 05:36:21.114308] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6738 00:36:07.687 [2024-12-15 05:36:21.115353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.687 [2024-12-15 05:36:21.115372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:36:07.687 [2024-12-15 05:36:21.123885] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee0ea0 00:36:07.687 [2024-12-15 05:36:21.125031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:16429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.687 [2024-12-15 05:36:21.125049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:07.687 [2024-12-15 05:36:21.132886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee12d8 00:36:07.687 [2024-12-15 05:36:21.134134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:22030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.687 [2024-12-15 05:36:21.134153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:07.687 [2024-12-15 05:36:21.141361] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee3d08 00:36:07.687 [2024-12-15 05:36:21.142251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:10089 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.687 [2024-12-15 05:36:21.142269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:07.687 [2024-12-15 05:36:21.150704] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efc560 00:36:07.687 [2024-12-15 05:36:21.151393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:12446 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.687 [2024-12-15 05:36:21.151411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:005c p:0 
m:0 dnr:0 00:36:07.687 [2024-12-15 05:36:21.159917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6738 00:36:07.687 [2024-12-15 05:36:21.160924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:9321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.687 [2024-12-15 05:36:21.160942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.687 [2024-12-15 05:36:21.168958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eef270 00:36:07.687 [2024-12-15 05:36:21.169978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.169999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.178006] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5ec8 00:36:07.688 [2024-12-15 05:36:21.179012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.179030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.187076] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efda78 00:36:07.688 [2024-12-15 05:36:21.188078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.188096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.196146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1f80 00:36:07.688 [2024-12-15 05:36:21.197164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.197182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.205376] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee27f0 00:36:07.688 [2024-12-15 05:36:21.206441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:18060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.206464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.214724] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efef90 00:36:07.688 [2024-12-15 05:36:21.215794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.215813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.223936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed0b0 00:36:07.688 [2024-12-15 05:36:21.224980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:11062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.225003] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.232955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6fa8 00:36:07.688 [2024-12-15 05:36:21.233889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:4240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.233907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.242068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeea00 00:36:07.688 [2024-12-15 05:36:21.243085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:24183 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.243103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.250556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7970 00:36:07.688 [2024-12-15 05:36:21.251769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:4644 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.251787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.258897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eef270 00:36:07.688 [2024-12-15 05:36:21.259543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.259562] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.267955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1710 00:36:07.688 [2024-12-15 05:36:21.268709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:20309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.268728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.277068] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeb328 00:36:07.688 [2024-12-15 05:36:21.277616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:17905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.277633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.286409] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef3a28 00:36:07.688 [2024-12-15 05:36:21.287176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.287193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.295012] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf550 00:36:07.688 [2024-12-15 05:36:21.295764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:13789 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:07.688 [2024-12-15 05:36:21.295782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.304419] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eedd58 00:36:07.688 [2024-12-15 05:36:21.305326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:24173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.305344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.315070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1710 00:36:07.688 [2024-12-15 05:36:21.316279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:20648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.316297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.325325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb480 00:36:07.688 [2024-12-15 05:36:21.326912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.326930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.331903] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef9f68 00:36:07.688 [2024-12-15 05:36:21.332559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 
nsid:1 lba:22655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.332578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.341170] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeff18 00:36:07.688 [2024-12-15 05:36:21.341730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:4926 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.341748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.349572] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeaab8 00:36:07.688 [2024-12-15 05:36:21.350219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.350237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.359070] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef3e60 00:36:07.688 [2024-12-15 05:36:21.359841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:9691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.359858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:36:07.688 [2024-12-15 05:36:21.370096] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee99d8 00:36:07.688 [2024-12-15 05:36:21.371310] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:7184 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.688 [2024-12-15 05:36:21.371328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.378847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eef270 00:36:07.948 [2024-12-15 05:36:21.380040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:18522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.380058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.387325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7538 00:36:07.948 [2024-12-15 05:36:21.388006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:13307 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.388024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.398363] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eee5c8 00:36:07.948 [2024-12-15 05:36:21.399902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:16489 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.399920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.404766] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efac10 
00:36:07.948 [2024-12-15 05:36:21.405454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:54 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.405473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.414242] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef6020 00:36:07.948 [2024-12-15 05:36:21.415024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:3974 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.415043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.424033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef0bc0 00:36:07.948 [2024-12-15 05:36:21.425086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:12134 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.425105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:36:07.948 27603.00 IOPS, 107.82 MiB/s [2024-12-15T04:36:21.635Z] [2024-12-15 05:36:21.433713] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef1ca0 00:36:07.948 [2024-12-15 05:36:21.434792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:19063 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.434812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.442636] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee88f8 00:36:07.948 [2024-12-15 05:36:21.443611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.443633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.451905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed920 00:36:07.948 [2024-12-15 05:36:21.453015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6068 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.453033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.461294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6fa8 00:36:07.948 [2024-12-15 05:36:21.461913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:8107 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.461933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.470581] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb048 00:36:07.948 [2024-12-15 05:36:21.471458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:23765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.471476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0041 p:0 m:0 
dnr:0 00:36:07.948 [2024-12-15 05:36:21.480762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eea248 00:36:07.948 [2024-12-15 05:36:21.482128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3410 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.482146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.488867] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee84c0 00:36:07.948 [2024-12-15 05:36:21.490157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:5897 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.490176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.496861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efc998 00:36:07.948 [2024-12-15 05:36:21.497529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:21484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.948 [2024-12-15 05:36:21.497547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:36:07.948 [2024-12-15 05:36:21.506008] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf550 00:36:07.949 [2024-12-15 05:36:21.506582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:3710 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.506601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:27 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.516575] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeaab8 00:36:07.949 [2024-12-15 05:36:21.517392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:5126 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.517411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.525254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1f80 00:36:07.949 [2024-12-15 05:36:21.526484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:17623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.526503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.534691] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5a90 00:36:07.949 [2024-12-15 05:36:21.535448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:25079 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.535467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.543576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb8b8 00:36:07.949 [2024-12-15 05:36:21.544578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:5787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.544597] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.552413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eee190 00:36:07.949 [2024-12-15 05:36:21.553129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:13007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.553146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.561541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eea680 00:36:07.949 [2024-12-15 05:36:21.562254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:20387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.562273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.572630] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef5378 00:36:07.949 [2024-12-15 05:36:21.574183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:15435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.574201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.579176] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef9b30 00:36:07.949 [2024-12-15 05:36:21.579952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:2697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.579970] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.588483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef8e88 00:36:07.949 [2024-12-15 05:36:21.589203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:5058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.589221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.597413] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef0350 00:36:07.949 [2024-12-15 05:36:21.598116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:13086 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.598135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.606650] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee3498 00:36:07.949 [2024-12-15 05:36:21.607478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:24599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.607497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.616196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efac10 00:36:07.949 [2024-12-15 05:36:21.616914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:24532 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:36:07.949 [2024-12-15 05:36:21.616933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:07.949 [2024-12-15 05:36:21.625350] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef8e88 00:36:07.949 [2024-12-15 05:36:21.626055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:11945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:07.949 [2024-12-15 05:36:21.626074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:08.209 [2024-12-15 05:36:21.634771] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efd640 00:36:08.209 [2024-12-15 05:36:21.635362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:13871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.209 [2024-12-15 05:36:21.635381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.643573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee9e10 00:36:08.210 [2024-12-15 05:36:21.644564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:2878 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.644584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.652775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef4f40 00:36:08.210 [2024-12-15 05:36:21.653595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 
lba:5686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.653614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.663131] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eef6a8 00:36:08.210 [2024-12-15 05:36:21.664327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:21242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.664346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.670968] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee38d0 00:36:08.210 [2024-12-15 05:36:21.671667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:19686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.671685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.680259] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee0630 00:36:08.210 [2024-12-15 05:36:21.681173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:1611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.681195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.689489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efc998 00:36:08.210 [2024-12-15 05:36:21.690309] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:13724 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.690328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.699648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efe720 00:36:08.210 [2024-12-15 05:36:21.701040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.701058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.709149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5220 00:36:08.210 [2024-12-15 05:36:21.710587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:4746 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.710605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.715580] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeb760 00:36:08.210 [2024-12-15 05:36:21.716159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:13952 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.716179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.724935] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee73e0 
00:36:08.210 [2024-12-15 05:36:21.725518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.725538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.734260] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef35f0 00:36:08.210 [2024-12-15 05:36:21.734825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:4136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.734843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.744365] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efe720 00:36:08.210 [2024-12-15 05:36:21.745735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:2955 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.745753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.752207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed920 00:36:08.210 [2024-12-15 05:36:21.752936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:18381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.752954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.761699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x23810e0) with pdu=0x200016ef6020 00:36:08.210 [2024-12-15 05:36:21.762551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:20497 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.762570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.771178] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef0788 00:36:08.210 [2024-12-15 05:36:21.772178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:12389 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.772196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.780186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed0b0 00:36:08.210 [2024-12-15 05:36:21.780852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:13200 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.780870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.788530] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb048 00:36:08.210 [2024-12-15 05:36:21.789189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:19152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.789208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.799231] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb048 00:36:08.210 [2024-12-15 05:36:21.800381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:16870 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.800399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.807092] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eec840 00:36:08.210 [2024-12-15 05:36:21.807741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:201 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.807759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.816081] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef92c0 00:36:08.210 [2024-12-15 05:36:21.816726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:18280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.816744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.825359] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee49b0 00:36:08.210 [2024-12-15 05:36:21.825867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:21209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.825885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0047 p:0 m:0 
dnr:0 00:36:08.210 [2024-12-15 05:36:21.834850] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee99d8 00:36:08.210 [2024-12-15 05:36:21.835491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:14925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.835509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.844354] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef6020 00:36:08.210 [2024-12-15 05:36:21.845145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13631 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.845164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.852925] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ede8a8 00:36:08.210 [2024-12-15 05:36:21.854264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.210 [2024-12-15 05:36:21.854282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:08.210 [2024-12-15 05:36:21.862975] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef8a50 00:36:08.211 [2024-12-15 05:36:21.864171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:17383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.211 [2024-12-15 05:36:21.864190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:67 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:08.211 [2024-12-15 05:36:21.870472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef4f40 00:36:08.211 [2024-12-15 05:36:21.871131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.211 [2024-12-15 05:36:21.871150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:36:08.211 [2024-12-15 05:36:21.880756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed0b0 00:36:08.211 [2024-12-15 05:36:21.881983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:3419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.211 [2024-12-15 05:36:21.882004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:36:08.211 [2024-12-15 05:36:21.890230] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efda78 00:36:08.211 [2024-12-15 05:36:21.891625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:7131 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.211 [2024-12-15 05:36:21.891653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:08.470 [2024-12-15 05:36:21.899668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef5378 00:36:08.470 [2024-12-15 05:36:21.901043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:24747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.470 [2024-12-15 05:36:21.901062] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:36:08.470 [2024-12-15 05:36:21.907683] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7970 00:36:08.470 [2024-12-15 05:36:21.908960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:6765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.470 [2024-12-15 05:36:21.908978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:36:08.470 [2024-12-15 05:36:21.915445] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee2c28 00:36:08.470 [2024-12-15 05:36:21.916175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:24165 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.470 [2024-12-15 05:36:21.916196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:36:08.470 [2024-12-15 05:36:21.926670] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eff3c8 00:36:08.470 [2024-12-15 05:36:21.927928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.470 [2024-12-15 05:36:21.927947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:36:08.470 [2024-12-15 05:36:21.936001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee7c50 00:36:08.470 [2024-12-15 05:36:21.936788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:15490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:21.936806] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:21.945136] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb048 00:36:08.471 [2024-12-15 05:36:21.946210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10861 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:21.946228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:21.954571] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef46d0 00:36:08.471 [2024-12-15 05:36:21.955962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23894 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:21.955980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:21.962826] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb048 00:36:08.471 [2024-12-15 05:36:21.964183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:1854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:21.964201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:21.970573] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee7c50 00:36:08.471 [2024-12-15 05:36:21.971365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:08.471 [2024-12-15 05:36:21.971383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:21.982128] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee4de8 00:36:08.471 [2024-12-15 05:36:21.983428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:16780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:21.983446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:21.991777] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6fa8 00:36:08.471 [2024-12-15 05:36:21.993276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:19670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:21.993294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:21.998582] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eea680 00:36:08.471 [2024-12-15 05:36:21.999270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:15416 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:21.999288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.009672] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7970 00:36:08.471 [2024-12-15 05:36:22.010717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 
lba:7775 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.010734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.019972] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed920 00:36:08.471 [2024-12-15 05:36:22.021509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:15206 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.021527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.026369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee88f8 00:36:08.471 [2024-12-15 05:36:22.027007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:11066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.027025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.036089] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eeee38 00:36:08.471 [2024-12-15 05:36:22.036984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:3821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.037004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.046181] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee9168 00:36:08.471 [2024-12-15 05:36:22.047179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:18577 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.047198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.054452] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efd208 00:36:08.471 [2024-12-15 05:36:22.055234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:8135 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.055252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.064643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf550 00:36:08.471 [2024-12-15 05:36:22.065716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.065733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.071834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef8618 00:36:08.471 [2024-12-15 05:36:22.072387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.072405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.081762] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7538 00:36:08.471 
[2024-12-15 05:36:22.082788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.082805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.092584] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee0a68 00:36:08.471 [2024-12-15 05:36:22.094094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.094113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.098976] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efef90 00:36:08.471 [2024-12-15 05:36:22.099606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:9344 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.099624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.108442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee8d30 00:36:08.471 [2024-12-15 05:36:22.109179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.109198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.117897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) 
with pdu=0x200016ee4578 00:36:08.471 [2024-12-15 05:36:22.118811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:7198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.118830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.126694] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efd208 00:36:08.471 [2024-12-15 05:36:22.127525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:14394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.127544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.135800] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef1ca0 00:36:08.471 [2024-12-15 05:36:22.136568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9345 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.136586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.146182] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee6738 00:36:08.471 [2024-12-15 05:36:22.147484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:5312 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.471 [2024-12-15 05:36:22.147502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:36:08.471 [2024-12-15 05:36:22.155921] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf988 00:36:08.731 [2024-12-15 05:36:22.157342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:25543 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.731 [2024-12-15 05:36:22.157364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:36:08.731 [2024-12-15 05:36:22.165561] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef2948 00:36:08.731 [2024-12-15 05:36:22.167062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:6867 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.731 [2024-12-15 05:36:22.167081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:08.731 [2024-12-15 05:36:22.171949] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eec408 00:36:08.731 [2024-12-15 05:36:22.172581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:421 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.731 [2024-12-15 05:36:22.172600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:36:08.731 [2024-12-15 05:36:22.182221] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efa3a0 00:36:08.731 [2024-12-15 05:36:22.183568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:20387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.731 [2024-12-15 05:36:22.183586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:36:08.731 [2024-12-15 
05:36:22.189981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb8b8 00:36:08.731 [2024-12-15 05:36:22.190741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:23905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.731 [2024-12-15 05:36:22.190760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:36:08.731 [2024-12-15 05:36:22.201066] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef3a28 00:36:08.731 [2024-12-15 05:36:22.202230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:3328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.202249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.209648] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eebfd0 00:36:08.732 [2024-12-15 05:36:22.210674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.210692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.219060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef5378 00:36:08.732 [2024-12-15 05:36:22.220166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.220184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 
sqhd:0059 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.227647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eddc00 00:36:08.732 [2024-12-15 05:36:22.228868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6603 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.228887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.237265] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efac10 00:36:08.732 [2024-12-15 05:36:22.237963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:20862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.237981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.246206] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee4de8 00:36:08.732 [2024-12-15 05:36:22.247205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.247223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.255603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee1710 00:36:08.732 [2024-12-15 05:36:22.256626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:16992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.256644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.265045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef6890 00:36:08.732 [2024-12-15 05:36:22.266235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:5269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.266253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.274534] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee3498 00:36:08.732 [2024-12-15 05:36:22.275826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18231 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.275844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.283067] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee8d30 00:36:08.732 [2024-12-15 05:36:22.284193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:20514 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.284211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.292154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf550 00:36:08.732 [2024-12-15 05:36:22.293074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:6034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.293092] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.301917] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eecc78 00:36:08.732 [2024-12-15 05:36:22.303153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:4608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.303172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.311196] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef92c0 00:36:08.732 [2024-12-15 05:36:22.311906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:25415 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.311924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.319701] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efd640 00:36:08.732 [2024-12-15 05:36:22.320981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.321004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.328877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016efb480 00:36:08.732 [2024-12-15 05:36:22.329827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:12507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 
[2024-12-15 05:36:22.329845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.337922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee0a68 00:36:08.732 [2024-12-15 05:36:22.338960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:1272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.338978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.347199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee5ec8 00:36:08.732 [2024-12-15 05:36:22.348305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:11000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.348324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.355940] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef0350 00:36:08.732 [2024-12-15 05:36:22.356863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:3268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.356882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.365000] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee12d8 00:36:08.732 [2024-12-15 05:36:22.365809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19677 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.365827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.373585] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eebb98 00:36:08.732 [2024-12-15 05:36:22.374398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:23840 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.374417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.384542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ee49b0 00:36:08.732 [2024-12-15 05:36:22.385777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:13769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.385795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.391960] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7da8 00:36:08.732 [2024-12-15 05:36:22.392595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:15913 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.392618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.401472] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016eed4e8 00:36:08.732 [2024-12-15 05:36:22.402335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:42 nsid:1 lba:14190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.402353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:36:08.732 [2024-12-15 05:36:22.410150] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef92c0 00:36:08.732 [2024-12-15 05:36:22.410990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:197 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.732 [2024-12-15 05:36:22.411010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:36:08.991 [2024-12-15 05:36:22.421677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016ef7100 00:36:08.991 [2024-12-15 05:36:22.423103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:8034 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.991 [2024-12-15 05:36:22.423121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:36:08.991 [2024-12-15 05:36:22.430274] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23810e0) with pdu=0x200016edf550 00:36:08.991 [2024-12-15 05:36:22.432567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:9111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:08.991 [2024-12-15 05:36:22.432586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:36:08.991 27781.50 IOPS, 108.52 MiB/s 00:36:08.991 Latency(us) 00:36:08.991 [2024-12-15T04:36:22.678Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:08.991 Job: nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:36:08.991 nvme0n1 : 2.01 27762.61 108.45 0.00 0.00 4606.60 1786.64 12982.37 00:36:08.991 [2024-12-15T04:36:22.678Z] =================================================================================================================== 00:36:08.991 [2024-12-15T04:36:22.678Z] Total : 27762.61 108.45 0.00 0.00 4606.60 1786.64 12982.37 00:36:08.991 { 00:36:08.991 "results": [ 00:36:08.991 { 00:36:08.991 "job": "nvme0n1", 00:36:08.991 "core_mask": "0x2", 00:36:08.991 "workload": "randwrite", 00:36:08.991 "status": "finished", 00:36:08.991 "queue_depth": 128, 00:36:08.991 "io_size": 4096, 00:36:08.991 "runtime": 2.00716, 00:36:08.991 "iops": 27762.609856712967, 00:36:08.991 "mibps": 108.44769475278503, 00:36:08.991 "io_failed": 0, 00:36:08.991 "io_timeout": 0, 00:36:08.991 "avg_latency_us": 4606.598362131731, 00:36:08.991 "min_latency_us": 1786.6361904761904, 00:36:08.991 "max_latency_us": 12982.369523809524 00:36:08.991 } 00:36:08.991 ], 00:36:08.991 "core_count": 1 00:36:08.991 } 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:08.991 | .driver_specific 00:36:08.991 | .nvme_error 00:36:08.991 | .status_code 00:36:08.991 | .command_transient_transport_error' 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # 
killprocess 531532 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 531532 ']' 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 531532 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:08.991 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 531532 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 531532' 00:36:09.251 killing process with pid 531532 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 531532 00:36:09.251 Received shutdown signal, test time was about 2.000000 seconds 00:36:09.251 00:36:09.251 Latency(us) 00:36:09.251 [2024-12-15T04:36:22.938Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:09.251 [2024-12-15T04:36:22.938Z] =================================================================================================================== 00:36:09.251 [2024-12-15T04:36:22.938Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 531532 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:36:09.251 05:36:22 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=532000 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 532000 /var/tmp/bperf.sock 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 532000 ']' 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:36:09.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:09.251 05:36:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:09.251 [2024-12-15 05:36:22.906142] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:36:09.251 [2024-12-15 05:36:22.906192] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid532000 ] 00:36:09.251 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:09.251 Zero copy mechanism will not be used. 00:36:09.510 [2024-12-15 05:36:22.981519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:09.510 [2024-12-15 05:36:23.001213] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:09.510 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:09.510 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:36:09.510 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:09.510 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:36:09.769 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:36:09.769 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:09.769 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:09.769 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:09.769 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:09.769 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:36:10.027 nvme0n1 00:36:10.027 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:36:10.027 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:10.027 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:10.027 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:10.027 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:36:10.027 05:36:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:36:10.027 I/O size of 131072 is greater than zero copy threshold (65536). 00:36:10.027 Zero copy mechanism will not be used. 00:36:10.027 Running I/O for 2 seconds... 
00:36:10.027 [2024-12-15 05:36:23.679059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.028 [2024-12-15 05:36:23.679148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.028 [2024-12-15 05:36:23.679178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.028 [2024-12-15 05:36:23.685055] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.028 [2024-12-15 05:36:23.685186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.028 [2024-12-15 05:36:23.685210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.028 [2024-12-15 05:36:23.690623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.028 [2024-12-15 05:36:23.690790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.028 [2024-12-15 05:36:23.690811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.028 [2024-12-15 05:36:23.697003] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.028 [2024-12-15 05:36:23.697144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.028 [2024-12-15 05:36:23.697164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.028 [2024-12-15 05:36:23.702868] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.028 [2024-12-15 05:36:23.702955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.028 [2024-12-15 05:36:23.702974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.028 [2024-12-15 05:36:23.708342] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.028 [2024-12-15 05:36:23.708436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.028 [2024-12-15 05:36:23.708455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.028 [2024-12-15 05:36:23.713740] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.028 [2024-12-15 05:36:23.713828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.028 [2024-12-15 05:36:23.713847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.719905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.719982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.720007] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.725819] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.725894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.725913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.732955] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.733066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.733085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.740079] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.740156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.740174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.745696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.745760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:10.289 [2024-12-15 05:36:23.745778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.751463] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.751533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.751551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.756213] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.756287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.756306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.761033] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.761083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.761101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.765655] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.765725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.765743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.770270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.770338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.770356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.774811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.289 [2024-12-15 05:36:23.774880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.289 [2024-12-15 05:36:23.774898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.289 [2024-12-15 05:36:23.779411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.779485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.779502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.783967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.784024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.784042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.788536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.788589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.788610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.793107] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.793164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.793182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.797678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.797736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.797754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.802208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:10.290 [2024-12-15 05:36:23.802273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.802290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.806746] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.806799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.806817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.811323] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.811374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.811392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.815764] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.815825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.815843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.820207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.820261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.820278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.824699] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.824759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.824777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.829237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.829317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.829336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.833786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.833863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.833881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.838314] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.838369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.838387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.842857] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.842917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.842935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.847401] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.847470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.847488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.852001] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.852066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.852084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:10.290 [2024-12-15 05:36:23.856497] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.856575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.856594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.861061] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.861125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.861143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.865532] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.865606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.865625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.870339] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.870433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.870450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.874980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.875070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.875088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.880002] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.880172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.880190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.886078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.886269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.886288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.892933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.893086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.893105] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.899538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.899695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.899713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.906294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.290 [2024-12-15 05:36:23.906457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.290 [2024-12-15 05:36:23.906475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.290 [2024-12-15 05:36:23.912557] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.912714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.912734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.918931] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.919078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:10.291 [2024-12-15 05:36:23.919101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.925748] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.925909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.925929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.931815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.931955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.931989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.938057] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.938242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.938262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.944202] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.944372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.944390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.950355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.950516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.950534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.956743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.956937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.956955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.962958] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.963122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.963141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.291 [2024-12-15 05:36:23.969137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.291 [2024-12-15 05:36:23.969312] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.291 [2024-12-15 05:36:23.969331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:23.974959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:23.975056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:23.975075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:23.981133] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:23.981286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:23.981305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:23.987390] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:23.987562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:23.987581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:23.993877] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:23.994031] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:23.994050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.000294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.000452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.000470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.006547] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.006712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.006730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.013091] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.013247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.013265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.019774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with 
pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.019938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.019958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.025920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.025970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.025987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.031741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.031837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.031854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.036507] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.036761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.036780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.041455] tcp.c:2241:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.041664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.041683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.046420] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.046666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.046685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.050981] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.051233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.051251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.055495] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.055744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.055763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 
05:36:24.061063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.061303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.061322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.065828] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.066076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.066095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.070318] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.070558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.070581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.074813] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.075062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.075081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.079165] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.079406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.079425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.083738] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.083987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.084011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.088659] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.088894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.088913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.093591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.551 [2024-12-15 05:36:24.093834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.551 [2024-12-15 05:36:24.093853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.551 [2024-12-15 05:36:24.098922] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.099226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.099245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.105627] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.105954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.105973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.112394] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.112678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.112697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.119253] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.119490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.119509] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.127047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.127360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.127379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.133853] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.134142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.134162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.140485] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.140787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.140806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.147498] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.147740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:10.552 [2024-12-15 05:36:24.147759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.154026] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.154318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.154337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.160841] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.161151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.161169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.167500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.167827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.167846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.174444] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.174765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.174784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.181207] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.181475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.181495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.188311] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.188607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.188627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.194190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.194427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.194447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.199542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.199787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.199807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.204988] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.205258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.205278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.211146] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.211385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.211405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.215895] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.216127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.216146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.220779] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:10.552 [2024-12-15 05:36:24.221045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.221064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.226759] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.227025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.227048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.552 [2024-12-15 05:36:24.233549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.552 [2024-12-15 05:36:24.233876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.552 [2024-12-15 05:36:24.233896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.812 [2024-12-15 05:36:24.240616] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.812 [2024-12-15 05:36:24.240922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.812 [2024-12-15 05:36:24.240942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.812 [2024-12-15 05:36:24.246696] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.812 [2024-12-15 05:36:24.246887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.812 [2024-12-15 05:36:24.246905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.812 [2024-12-15 05:36:24.251411] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.812 [2024-12-15 05:36:24.251620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.812 [2024-12-15 05:36:24.251639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.812 [2024-12-15 05:36:24.256421] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.812 [2024-12-15 05:36:24.256614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.812 [2024-12-15 05:36:24.256634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.812 [2024-12-15 05:36:24.262090] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.812 [2024-12-15 05:36:24.262279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.812 [2024-12-15 05:36:24.262299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.812 [2024-12-15 05:36:24.267294] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.812 [2024-12-15 05:36:24.267486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.812 [2024-12-15 05:36:24.267505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.812 [2024-12-15 05:36:24.272141] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.272369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.272388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.277268] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.277465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.277485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.281920] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.282130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.282149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:10.813 [2024-12-15 05:36:24.286869] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.287058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.287078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.292500] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.292719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.292738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.298173] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.298407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.298426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.304294] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.304612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.304632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.311060] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.311241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.311261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.317228] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.317457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.317477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.322588] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.322810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.322829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.328379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.328682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.328702] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.335864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.336056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.336073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.341959] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.342264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.342283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.347647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.347974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.347998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.353662] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.353951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:10.813 [2024-12-15 05:36:24.353971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.360474] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.360733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.360753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.366466] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.366693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.366712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.372673] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.372966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.372985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.379743] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.379913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.379935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.384967] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.385128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.385146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.390435] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.390631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.390650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.395908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.396074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.396092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.401668] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.401751] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.401769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.407388] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.407584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.407602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.412808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.412979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.413004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.417643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.813 [2024-12-15 05:36:24.417705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.417723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.422686] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:10.813 [2024-12-15 05:36:24.422863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.813 [2024-12-15 05:36:24.422881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.813 [2024-12-15 05:36:24.426725] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.426926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.426945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.430653] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.430845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.430863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.434434] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.434615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.434634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.438214] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.438413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.438431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.442094] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.442278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.442298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.445936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.446134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.446152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.449811] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.450009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.450026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.453657] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.453862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.453881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.457491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.457699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.457723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.461351] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.461562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.461581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.465201] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.465403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.465421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:36:10.814 [2024-12-15 05:36:24.469142] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.469330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.469347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.473483] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.473652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.473669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.477916] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.478061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.478079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.482288] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.482394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.482412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.487063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.487207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.487225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.491775] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.491929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.491947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:10.814 [2024-12-15 05:36:24.496297] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:10.814 [2024-12-15 05:36:24.496439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:10.814 [2024-12-15 05:36:24.496461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.074 [2024-12-15 05:36:24.501145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.074 [2024-12-15 05:36:24.501263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.074 [2024-12-15 05:36:24.501281] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.074 [2024-12-15 05:36:24.505872] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.074 [2024-12-15 05:36:24.506163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.074 [2024-12-15 05:36:24.506182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.074 [2024-12-15 05:36:24.510749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.074 [2024-12-15 05:36:24.510920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.074 [2024-12-15 05:36:24.510937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.074 [2024-12-15 05:36:24.515278] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.074 [2024-12-15 05:36:24.515406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.074 [2024-12-15 05:36:24.515424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.074 [2024-12-15 05:36:24.520169] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.074 [2024-12-15 05:36:24.520314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.074 [2024-12-15 05:36:24.520331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.074 [2024-12-15 05:36:24.524928] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.074 [2024-12-15 05:36:24.525331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.074 [2024-12-15 05:36:24.525350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.074 [2024-12-15 05:36:24.529554] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.074 [2024-12-15 05:36:24.529699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.074 [2024-12-15 05:36:24.529716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.533913] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.534072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.534090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.538815] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.538948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.538965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.543010] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.543159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.543182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.547756] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.547890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.547908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.552270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.552435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.552453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.556643] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.556796] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.556814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.561430] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.561529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.561547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.566172] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.566300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.566318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.570741] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.570883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.570900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.574874] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:11.075 [2024-12-15 05:36:24.575045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.575062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.579119] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.579223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.579240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.582951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.583074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.583094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.586834] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.586933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.586951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.590745] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.590866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.590883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.594623] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.594733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.594751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.598504] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.598615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.598633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.602369] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.602462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.602480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.606745] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.606842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.606860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.610879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.611010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.611032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.615129] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.615263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.615282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.619896] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.620013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.620031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:11.075 [2024-12-15 05:36:24.624270] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.624365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.624382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.628292] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.628403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.628422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.632132] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.632231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.632248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.636059] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.075 [2024-12-15 05:36:24.636181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.075 [2024-12-15 05:36:24.636198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.075 [2024-12-15 05:36:24.640039] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.640145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.640163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.643933] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.644054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.644072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.647754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.647862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.647881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.651839] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.651935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.651953] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.656320] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.656443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.656461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.660471] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.660592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.660610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.664422] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.664534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.664552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.668299] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.668422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.076 [2024-12-15 05:36:24.668440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.672261] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.672383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.672401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.076 5944.00 IOPS, 743.00 MiB/s [2024-12-15T04:36:24.763Z] [2024-12-15 05:36:24.677121] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.677270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.677288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.681048] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.681211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.681228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.684908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.685047] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.685066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.688702] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.688864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.688883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.692542] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.692685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.692704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.696391] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.696540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.696558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.700237] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:11.076 [2024-12-15 05:36:24.700395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.700412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.704080] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.704244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.704262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.707951] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.708128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.708146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.712480] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.712710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.712730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.717622] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.717812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.717833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.721790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.721931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.721949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.725884] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.726046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.726064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.730215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.730385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.730403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.734165] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.734334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.734352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.738109] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.738244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.738261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.742125] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.742319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.742337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.076 [2024-12-15 05:36:24.746004] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.746130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.076 [2024-12-15 05:36:24.746148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:36:11.076 [2024-12-15 05:36:24.750137] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.076 [2024-12-15 05:36:24.750381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.077 [2024-12-15 05:36:24.750400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.077 [2024-12-15 05:36:24.755186] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.077 [2024-12-15 05:36:24.755339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.077 [2024-12-15 05:36:24.755357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.336 [2024-12-15 05:36:24.759663] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.336 [2024-12-15 05:36:24.759810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.336 [2024-12-15 05:36:24.759828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.764236] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.764393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.764411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.768780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.768940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.768958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.774337] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.774445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.774462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.778556] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.778682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.778700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.782529] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.782639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.782657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.786577] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.786697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.786714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.790553] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.790664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.790682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.794538] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.794675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.794692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.798514] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.798646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.337 [2024-12-15 05:36:24.798663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.802429] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.802593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.802610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.806442] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.806570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.806587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.810864] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.811011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.811029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.815398] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.815525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.815543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.819426] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.819549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.819566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.824785] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.824898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.824916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.829678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.829787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.829807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.833700] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.833831] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.833847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.837593] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.837735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.837753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.841591] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.841718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.841736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.845576] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.845720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.845737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.849603] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:11.337 [2024-12-15 05:36:24.849729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.849746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.853805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.853915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.853933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.857847] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.857976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.857999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.861879] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.862002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.862019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.865820] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.865926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.865944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.337 [2024-12-15 05:36:24.869802] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.337 [2024-12-15 05:36:24.869933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.337 [2024-12-15 05:36:24.869950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.873804] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.873912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.873930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.877929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.878068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.878086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.881917] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.882035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.882053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.885908] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.886049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.886067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.889873] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.890003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.890020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.894544] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.894717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.894735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:11.338 [2024-12-15 05:36:24.899082] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.899281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.899300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.903750] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.903900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.903918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.909078] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.909229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.909248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.915075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.915322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.915342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.921427] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.921654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.921673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.928521] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.928714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.928732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.935045] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.935180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.935198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.941790] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.942063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.942082] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.948805] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.949045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.949065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.955199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.955370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.955391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.961773] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.961966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.961985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.968729] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.968957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.338 [2024-12-15 05:36:24.968976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.975562] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.975680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.975698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.982502] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.982725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.982744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.988647] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.988875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.988894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.994238] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.994363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.994380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:24.998791] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:24.998931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:24.998948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:25.002770] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:25.002910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:25.002927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:25.006697] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:25.006839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:25.006856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:25.010641] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:25.010762] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:25.010779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:25.014669] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:25.014813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.338 [2024-12-15 05:36:25.014831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.338 [2024-12-15 05:36:25.019296] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.338 [2024-12-15 05:36:25.019406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.339 [2024-12-15 05:36:25.019424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.023795] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.599 [2024-12-15 05:36:25.023942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.023971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.027930] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:11.599 [2024-12-15 05:36:25.028081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.028100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.031936] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.599 [2024-12-15 05:36:25.032081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.032098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.035900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.599 [2024-12-15 05:36:25.036039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.036057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.039836] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.599 [2024-12-15 05:36:25.039981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.040003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.043897] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.599 [2024-12-15 05:36:25.044036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.044053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.047929] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.599 [2024-12-15 05:36:25.048046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.048064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.599 [2024-12-15 05:36:25.051886] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.599 [2024-12-15 05:36:25.052048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.599 [2024-12-15 05:36:25.052066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.055918] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.056062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.056080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.059879] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.060026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.060043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.064185] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.064308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.064325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.068215] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.068351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.068368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.072197] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.072334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.072351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:36:11.600 [2024-12-15 05:36:25.076216] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.076359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.076386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.080122] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.080247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.080265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.084145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.084262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.084280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.088881] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.089027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.089045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.093677] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.093787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.093805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.097863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.097982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.098005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.101844] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.101990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.102014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.105889] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.106017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.106035] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.109863] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.110049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.110066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.113780] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.113898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.113916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.117794] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.117932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.117949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.122063] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.122190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.600 [2024-12-15 05:36:25.122207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.126549] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.126684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.126701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.131349] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.131480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.131498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.136145] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.136267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.136284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.140322] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.140453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.140470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.144489] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.144670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.144688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.149164] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.149404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.149423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.155047] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.155192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.155210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.160425] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.160572] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.160595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.166341] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.166479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.166497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.600 [2024-12-15 05:36:25.172249] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.600 [2024-12-15 05:36:25.172362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.600 [2024-12-15 05:36:25.172379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.177231] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.177339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.177357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.181786] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:11.601 [2024-12-15 05:36:25.181930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.181947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.185808] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.185957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.185974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.189774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.189914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.189931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.193749] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.193913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.193934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.198462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.198594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.198612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.203845] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.203977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.204001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.210049] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.210217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.210235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.215566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.215692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.215711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.220755] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.220879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.220897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.225846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.226024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.226042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.230325] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.230478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.230496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.234910] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.235317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.235336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:36:11.601 [2024-12-15 05:36:25.239357] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.239528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.239546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.243218] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.243362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.243379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.247851] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.248018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.248035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.253734] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.253865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.253882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.258254] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.258361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.258379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.262570] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.262660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.262677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.266892] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.267006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.267024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.271124] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.271242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.271259] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.275345] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.275504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.275523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.279536] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.279665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.279683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.601 [2024-12-15 05:36:25.283758] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.601 [2024-12-15 05:36:25.283881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.601 [2024-12-15 05:36:25.283899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.287899] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.288054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.862 [2024-12-15 05:36:25.288072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.291902] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.292017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.292035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.295952] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.296068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.296085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.300961] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.301124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.301142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.305727] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.305867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.305884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.310519] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.310696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.310715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.315645] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.315768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.315789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.320953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.321124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.321142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.326346] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.326597] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.326616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.331548] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.331728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.331746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.337005] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.337295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.337314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.342340] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.342617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.342637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.347774] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:11.862 [2024-12-15 05:36:25.347956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.347974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.352953] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.353123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.353143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.357188] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.357358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.862 [2024-12-15 05:36:25.357376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.862 [2024-12-15 05:36:25.361462] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.862 [2024-12-15 05:36:25.361632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.361651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.365601] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.365775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.365794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.369735] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.369904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.369922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.373900] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.374063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.374082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.377846] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.378007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.378026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.381835] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.382000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.382018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.386760] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.386937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.386956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.391833] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.392022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.392040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.396219] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.396409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.396427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:36:11.863 [2024-12-15 05:36:25.400379] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.400532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.400550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.404624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.404844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.404863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.409887] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.410057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.410076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.414747] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.414952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.414972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.418885] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.419102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.419121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.422909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.423095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.423114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.427149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.427343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.427363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.431247] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.431422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.431441] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.435358] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.435540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.435562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.439467] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.439645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.439663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.443321] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.443518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.443537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.447160] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.447343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:11.863 [2024-12-15 05:36:25.447361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.451151] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.451330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.451350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.455149] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.455354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.455374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.459071] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.459245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.459264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.463105] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.463306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.463326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.467009] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.467183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.467201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.471861] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.472065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.472084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.476287] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.476497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.476517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.481353] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.481540] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.481557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.487017] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.487205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.487224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.492208] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.492503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.492523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.497563] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.863 [2024-12-15 05:36:25.497832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.863 [2024-12-15 05:36:25.497851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.863 [2024-12-15 05:36:25.503854] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 
00:36:11.864 [2024-12-15 05:36:25.504158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.504177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.864 [2024-12-15 05:36:25.510075] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.864 [2024-12-15 05:36:25.510254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.510272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.864 [2024-12-15 05:36:25.516564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.864 [2024-12-15 05:36:25.516765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.516786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.864 [2024-12-15 05:36:25.523157] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.864 [2024-12-15 05:36:25.523341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.523361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:11.864 [2024-12-15 05:36:25.528355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.864 [2024-12-15 05:36:25.528501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.528519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:11.864 [2024-12-15 05:36:25.534609] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.864 [2024-12-15 05:36:25.534787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.534805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:11.864 [2024-12-15 05:36:25.539432] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.864 [2024-12-15 05:36:25.539587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.539605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:11.864 [2024-12-15 05:36:25.543797] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:11.864 [2024-12-15 05:36:25.543944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:11.864 [2024-12-15 05:36:25.543964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.548474] 
tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.548630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.548649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.552947] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.553119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.553137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.557564] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.557718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.557736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.561905] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.562071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.562098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:36:12.124 [2024-12-15 05:36:25.566491] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.566642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.566660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.571016] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.571167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.571185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.576190] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.576342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.576360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.580511] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.580669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.580687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.584754] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.584902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.584921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.589459] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.589611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.589629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.593986] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.594171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.594189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.598676] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.598838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.598857] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.603355] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.603514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.603535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.607698] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.607847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.607866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.611909] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.612069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.612088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.616678] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.616828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:36:12.124 [2024-12-15 05:36:25.616846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.621154] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.621310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.621328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.625569] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.625714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.625732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.630147] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.630302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.630320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.634980] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.635139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.635157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.639305] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.639458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.639477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.644210] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.644362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.644381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.648624] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.648781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.648799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.653018] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.653173] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.653191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.657304] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.124 [2024-12-15 05:36:25.657461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.124 [2024-12-15 05:36:25.657479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.124 [2024-12-15 05:36:25.661566] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.125 [2024-12-15 05:36:25.661712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.125 [2024-12-15 05:36:25.661730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.125 [2024-12-15 05:36:25.665541] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.125 [2024-12-15 05:36:25.665696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.125 [2024-12-15 05:36:25.665714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:36:12.125 [2024-12-15 05:36:25.670056] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.125 [2024-12-15 05:36:25.670190] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.125 [2024-12-15 05:36:25.670208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:36:12.125 [2024-12-15 05:36:25.675199] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.125 [2024-12-15 05:36:25.675331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.125 [2024-12-15 05:36:25.675350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:36:12.125 6348.50 IOPS, 793.56 MiB/s [2024-12-15T04:36:25.812Z] [2024-12-15 05:36:25.680031] tcp.c:2241:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x23815c0) with pdu=0x200016eff3c8 00:36:12.125 [2024-12-15 05:36:25.680091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:12.125 [2024-12-15 05:36:25.680109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:36:12.125 00:36:12.125 Latency(us) 00:36:12.125 [2024-12-15T04:36:25.812Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.125 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:36:12.125 nvme0n1 : 2.00 6347.66 793.46 0.00 0.00 2516.48 1466.76 9736.78 00:36:12.125 [2024-12-15T04:36:25.812Z] =================================================================================================================== 00:36:12.125 [2024-12-15T04:36:25.812Z] Total : 6347.66 793.46 0.00 0.00 2516.48 1466.76 9736.78 00:36:12.125 { 00:36:12.125 
"results": [ 00:36:12.125 { 00:36:12.125 "job": "nvme0n1", 00:36:12.125 "core_mask": "0x2", 00:36:12.125 "workload": "randwrite", 00:36:12.125 "status": "finished", 00:36:12.125 "queue_depth": 16, 00:36:12.125 "io_size": 131072, 00:36:12.125 "runtime": 2.003417, 00:36:12.125 "iops": 6347.655031378889, 00:36:12.125 "mibps": 793.4568789223612, 00:36:12.125 "io_failed": 0, 00:36:12.125 "io_timeout": 0, 00:36:12.125 "avg_latency_us": 2516.478705894247, 00:36:12.125 "min_latency_us": 1466.7580952380952, 00:36:12.125 "max_latency_us": 9736.777142857143 00:36:12.125 } 00:36:12.125 ], 00:36:12.125 "core_count": 1 00:36:12.125 } 00:36:12.125 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:36:12.125 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:36:12.125 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:36:12.125 | .driver_specific 00:36:12.125 | .nvme_error 00:36:12.125 | .status_code 00:36:12.125 | .command_transient_transport_error' 00:36:12.125 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 411 > 0 )) 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 532000 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 532000 ']' 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 532000 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:12.384 05:36:25 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 532000 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 532000' 00:36:12.384 killing process with pid 532000 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 532000 00:36:12.384 Received shutdown signal, test time was about 2.000000 seconds 00:36:12.384 00:36:12.384 Latency(us) 00:36:12.384 [2024-12-15T04:36:26.071Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:12.384 [2024-12-15T04:36:26.071Z] =================================================================================================================== 00:36:12.384 [2024-12-15T04:36:26.071Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:36:12.384 05:36:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 532000 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 530374 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 530374 ']' 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 530374 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 530374 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 530374' 00:36:12.643 killing process with pid 530374 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 530374 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 530374 00:36:12.643 00:36:12.643 real 0m13.789s 00:36:12.643 user 0m26.230s 00:36:12.643 sys 0m4.721s 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:12.643 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:36:12.643 ************************************ 00:36:12.643 END TEST nvmf_digest_error 00:36:12.643 ************************************ 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # sync 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set +e 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:12.902 rmmod nvme_tcp 00:36:12.902 rmmod nvme_fabrics 00:36:12.902 rmmod nvme_keyring 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@128 -- # set -e 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # return 0 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@517 -- # '[' -n 530374 ']' 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@518 -- # killprocess 530374 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 530374 ']' 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 530374 00:36:12.902 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (530374) - No such process 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 530374 is not found' 00:36:12.902 Process with pid 530374 is not found 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # iptr 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-save 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@791 -- # iptables-restore 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:12.902 05:36:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:14.808 05:36:28 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:14.808 00:36:14.808 real 0m35.747s 00:36:14.808 user 0m54.027s 00:36:14.808 sys 0m13.867s 00:36:14.808 05:36:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:14.808 05:36:28 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:36:14.808 ************************************ 00:36:14.808 END TEST nvmf_digest 00:36:14.808 ************************************ 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:15.068 ************************************ 00:36:15.068 START TEST nvmf_bdevperf 00:36:15.068 ************************************ 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:36:15.068 * Looking for test storage... 00:36:15.068 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:36:15.068 05:36:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.068 --rc genhtml_branch_coverage=1 00:36:15.068 --rc genhtml_function_coverage=1 00:36:15.068 --rc genhtml_legend=1 00:36:15.068 --rc geninfo_all_blocks=1 
00:36:15.068 --rc geninfo_unexecuted_blocks=1 00:36:15.068 00:36:15.068 ' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.068 --rc genhtml_branch_coverage=1 00:36:15.068 --rc genhtml_function_coverage=1 00:36:15.068 --rc genhtml_legend=1 00:36:15.068 --rc geninfo_all_blocks=1 00:36:15.068 --rc geninfo_unexecuted_blocks=1 00:36:15.068 00:36:15.068 ' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.068 --rc genhtml_branch_coverage=1 00:36:15.068 --rc genhtml_function_coverage=1 00:36:15.068 --rc genhtml_legend=1 00:36:15.068 --rc geninfo_all_blocks=1 00:36:15.068 --rc geninfo_unexecuted_blocks=1 00:36:15.068 00:36:15.068 ' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:15.068 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:15.068 --rc genhtml_branch_coverage=1 00:36:15.068 --rc genhtml_function_coverage=1 00:36:15.068 --rc genhtml_legend=1 00:36:15.068 --rc geninfo_all_blocks=1 00:36:15.068 --rc geninfo_unexecuted_blocks=1 00:36:15.068 00:36:15.068 ' 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:15.068 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:36:15.068 05:36:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:15.069 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:15.069 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:15.069 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:15.069 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:15.069 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:36:15.328 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # : 0 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:15.329 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@309 -- # xtrace_disable 00:36:15.329 05:36:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.900 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:21.900 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # pci_devs=() 00:36:21.900 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:21.900 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:21.900 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # net_devs=() 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # e810=() 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # local -ga e810 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # x722=() 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # local -ga x722 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # mlx=() 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@322 -- # local -ga mlx 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 
00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:21.901 Found 
0000:af:00.0 (0x8086 - 0x159b) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:21.901 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp 
== tcp ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:21.901 Found net devices under 0000:af:00.0: cvl_0_0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:21.901 Found net devices under 0000:af:00.1: cvl_0_1 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@442 -- # is_hw=yes 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # ip link set 
cvl_0_0 netns cvl_0_0_ns_spdk 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:21.901 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:21.901 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.380 ms 00:36:21.901 00:36:21.901 --- 10.0.0.2 ping statistics --- 00:36:21.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.901 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:21.901 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
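The addressing above gives cvl_0_1 the address 10.0.0.1/24 and, inside the cvl_0_0_ns_spdk namespace, cvl_0_0 the address 10.0.0.2/24; the ping exchange succeeds only because both sit in the same directly connected /24. A minimal stdlib sketch of that relationship (addresses taken from the log; nothing here touches the harness itself):

```python
# Sketch: the initiator/target addresses assigned above share one /24,
# so the pings need no routing beyond the directly connected subnet.
import ipaddress

initiator = ipaddress.ip_interface("10.0.0.1/24")  # assigned to cvl_0_1
target = ipaddress.ip_interface("10.0.0.2/24")     # assigned to cvl_0_0 in the netns
assert initiator.network == target.network == ipaddress.ip_network("10.0.0.0/24")
assert target.ip in initiator.network
```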
00:36:21.901 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.209 ms 00:36:21.901 00:36:21.901 --- 10.0.0.1 ping statistics --- 00:36:21.901 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:21.901 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # return 0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=536071 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 536071 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 536071 ']' 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:21.901 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 [2024-12-15 05:36:34.646988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:21.901 [2024-12-15 05:36:34.647043] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:21.901 [2024-12-15 05:36:34.725208] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:36:21.901 [2024-12-15 05:36:34.747316] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:21.901 [2024-12-15 05:36:34.747352] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
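nvmf_tgt is started with `-m 0xE`, and the app then reports "Total cores available: 3". Each set bit in an SPDK/DPDK core mask selects one CPU core; a quick sketch decoding the mask (plain Python, independent of SPDK):

```python
# Decode an SPDK/DPDK-style core mask: bit N set => core N is used.
# 0xE = 0b1110 selects cores 1, 2 and 3, matching the three reactor
# start-up notices in the log.
def cores_from_mask(mask: int) -> list[int]:
    return [bit for bit in range(mask.bit_length()) if (mask >> bit) & 1]

assert cores_from_mask(0xE) == [1, 2, 3]
assert len(cores_from_mask(0xE)) == 3  # "Total cores available: 3"
```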
00:36:21.901 [2024-12-15 05:36:34.747360] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:21.901 [2024-12-15 05:36:34.747366] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:21.901 [2024-12-15 05:36:34.747371] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:36:21.901 [2024-12-15 05:36:34.748621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:36:21.901 [2024-12-15 05:36:34.748729] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:36:21.901 [2024-12-15 05:36:34.748731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 [2024-12-15 05:36:34.887950] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.901 05:36:34 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 Malloc0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:21.901 [2024-12-15 05:36:34.963661] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:21.901 { 00:36:21.901 "params": { 00:36:21.901 "name": "Nvme$subsystem", 00:36:21.901 "trtype": "$TEST_TRANSPORT", 00:36:21.901 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:21.901 "adrfam": "ipv4", 00:36:21.901 "trsvcid": "$NVMF_PORT", 00:36:21.901 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:21.901 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:21.901 "hdgst": ${hdgst:-false}, 00:36:21.901 "ddgst": ${ddgst:-false} 00:36:21.901 }, 00:36:21.901 "method": "bdev_nvme_attach_controller" 00:36:21.901 } 00:36:21.901 EOF 00:36:21.901 )") 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 
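The rpc_cmd calls above (nvmf_create_transport, bdev_malloc_create, nvmf_create_subsystem, nvmf_subsystem_add_ns, nvmf_subsystem_add_listener) travel as JSON-RPC 2.0 requests over SPDK's Unix socket at /var/tmp/spdk.sock. The sketch below only builds the request envelopes in that order; the `params` field names are assumptions mirrored from the CLI flags, so verify them against `scripts/rpc.py --help` for your SPDK revision before relying on them:

```python
# Sketch of the JSON-RPC 2.0 envelopes behind the rpc_cmd sequence above.
# NOTE: the params keys are illustrative guesses based on the CLI flags,
# not verified against this SPDK revision.
import json
from itertools import count

_ids = count(1)

def rpc_request(method, params=None):
    req = {"jsonrpc": "2.0", "id": next(_ids), "method": method}
    if params is not None:
        req["params"] = params
    return req

setup = [
    rpc_request("nvmf_create_transport", {"trtype": "tcp"}),  # -t tcp -o -u 8192
    # bdev_malloc_create 64 512 -b Malloc0: 64 MiB / 512 B = 131072 blocks
    rpc_request("bdev_malloc_create",
                {"name": "Malloc0", "num_blocks": 131072, "block_size": 512}),
    rpc_request("nvmf_create_subsystem",
                {"nqn": "nqn.2016-06.io.spdk:cnode1"}),  # -a -s SPDK00000000000001
    rpc_request("nvmf_subsystem_add_ns",
                {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "namespace": {"bdev_name": "Malloc0"}}),
    rpc_request("nvmf_subsystem_add_listener",
                {"nqn": "nqn.2016-06.io.spdk:cnode1",
                 "listen_address": {"trtype": "tcp", "traddr": "10.0.0.2",
                                    "trsvcid": "4420"}}),
]

# One request per line, as it would be written to the socket.
wire = "\n".join(json.dumps(r) for r in setup)
```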
00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:21.901 05:36:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:21.901 "params": { 00:36:21.901 "name": "Nvme1", 00:36:21.901 "trtype": "tcp", 00:36:21.901 "traddr": "10.0.0.2", 00:36:21.901 "adrfam": "ipv4", 00:36:21.901 "trsvcid": "4420", 00:36:21.901 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:21.901 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:21.901 "hdgst": false, 00:36:21.901 "ddgst": false 00:36:21.901 }, 00:36:21.901 "method": "bdev_nvme_attach_controller" 00:36:21.901 }' 00:36:21.901 [2024-12-15 05:36:35.015198] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:21.901 [2024-12-15 05:36:35.015237] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid536159 ] 00:36:21.901 [2024-12-15 05:36:35.088705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:21.901 [2024-12-15 05:36:35.111323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:21.901 Running I/O for 1 seconds... 
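gen_nvmf_target_json expands its heredoc into the bdev_nvme_attach_controller fragment printed above and pipes it through `jq .` as a validity check. A stdlib-only equivalent of that check, using the exact fragment from the log:

```python
# Re-parse the config fragment that gen_nvmf_target_json printed above;
# json.loads plays the role of the `jq .` validation step.
import json

fragment = """
{
  "params": {
    "name": "Nvme1",
    "trtype": "tcp",
    "traddr": "10.0.0.2",
    "adrfam": "ipv4",
    "trsvcid": "4420",
    "subnqn": "nqn.2016-06.io.spdk:cnode1",
    "hostnqn": "nqn.2016-06.io.spdk:host1",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
"""
cfg = json.loads(fragment)
assert cfg["method"] == "bdev_nvme_attach_controller"
assert cfg["params"]["traddr"] == "10.0.0.2" and cfg["params"]["trsvcid"] == "4420"
```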
00:36:22.838 11351.00 IOPS, 44.34 MiB/s 00:36:22.838 Latency(us) 00:36:22.838 [2024-12-15T04:36:36.525Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:22.838 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:22.838 Verification LBA range: start 0x0 length 0x4000 00:36:22.838 Nvme1n1 : 1.00 11440.83 44.69 0.00 0.00 11145.03 1131.28 10423.34 00:36:22.838 [2024-12-15T04:36:36.525Z] =================================================================================================================== 00:36:22.838 [2024-12-15T04:36:36.525Z] Total : 11440.83 44.69 0.00 0.00 11145.03 1131.28 10423.34 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=536388 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # config=() 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@560 -- # local subsystem config 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:36:23.097 { 00:36:23.097 "params": { 00:36:23.097 "name": "Nvme$subsystem", 00:36:23.097 "trtype": "$TEST_TRANSPORT", 00:36:23.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:36:23.097 "adrfam": "ipv4", 00:36:23.097 "trsvcid": "$NVMF_PORT", 00:36:23.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:36:23.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:36:23.097 "hdgst": ${hdgst:-false}, 00:36:23.097 "ddgst": 
${ddgst:-false} 00:36:23.097 }, 00:36:23.097 "method": "bdev_nvme_attach_controller" 00:36:23.097 } 00:36:23.097 EOF 00:36:23.097 )") 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@582 -- # cat 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@584 -- # jq . 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@585 -- # IFS=, 00:36:23.097 05:36:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:36:23.097 "params": { 00:36:23.097 "name": "Nvme1", 00:36:23.097 "trtype": "tcp", 00:36:23.097 "traddr": "10.0.0.2", 00:36:23.097 "adrfam": "ipv4", 00:36:23.097 "trsvcid": "4420", 00:36:23.097 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:36:23.097 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:36:23.097 "hdgst": false, 00:36:23.097 "ddgst": false 00:36:23.097 }, 00:36:23.097 "method": "bdev_nvme_attach_controller" 00:36:23.097 }' 00:36:23.097 [2024-12-15 05:36:36.627320] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:23.097 [2024-12-15 05:36:36.627368] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid536388 ] 00:36:23.097 [2024-12-15 05:36:36.701239] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:36:23.097 [2024-12-15 05:36:36.721121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:36:23.356 Running I/O for 15 seconds... 
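bdevperf's summary lines report IOPS and MiB/s together; with `-o 4096` the two columns are related by MiB/s = IOPS × 4096 / 2^20 (i.e. IOPS / 256). A quick cross-check against the 1-second verify run above:

```python
# Cross-check bdevperf's throughput column: MiB/s = IOPS * io_size / 2**20.
def iops_to_mibps(iops: float, io_size: int = 4096) -> float:
    return iops * io_size / (1 << 20)

# Figures from the 1-second verify run above (-o 4096).
assert round(iops_to_mibps(11440.83), 2) == 44.69  # summary line
assert round(iops_to_mibps(11351.00), 2) == 44.34  # interim tick
```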
00:36:25.230 11419.00 IOPS, 44.61 MiB/s [2024-12-15T04:36:39.857Z] 11387.00 IOPS, 44.48 MiB/s [2024-12-15T04:36:39.857Z] 05:36:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 536071 00:36:26.171 05:36:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:36:26.171 [2024-12-15 05:36:39.606589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:116112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.171 [2024-12-15 05:36:39.606629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:116496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:116504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:116512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:116520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:116528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:116536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:116544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:116552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:116560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:116568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:26.171 [2024-12-15 05:36:39.606814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:116576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:116584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:116592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:116600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:116608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606909] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:116616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:116624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:116632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:116640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:116648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.606985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.606999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:116656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:116664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:116672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:116680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:116688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:116696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:116704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:36:26.171 [2024-12-15 05:36:39.607109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:116712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:116720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:116728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:116736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:116744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607201] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:116752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:116760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:116768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:116776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:116784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.171 [2024-12-15 05:36:39.607305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:116792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.171 [2024-12-15 05:36:39.607315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:116800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:116808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:116816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:116824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:116832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:116840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 
[2024-12-15 05:36:39.607459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:116848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:116856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:116864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:116872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:116880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607549] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:116888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:116896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:116904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:116912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:116920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:116928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:116936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:116944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:116952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:116960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:116968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:116976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 
[2024-12-15 05:36:39.607718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:116984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:116992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:117000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:117008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:117016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607801] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:117024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:117032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:117040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:117048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:117056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:117064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:117072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:117080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:117088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:117096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:117104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:117112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 
[2024-12-15 05:36:39.607968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.172 [2024-12-15 05:36:39.607976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:117120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.172 [2024-12-15 05:36:39.607982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.607990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:117128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:36:26.173 [2024-12-15 05:36:39.608002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:116120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:116128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:116136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608059] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:116144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:116152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:116160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:116168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:116176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:116184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:116192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:116200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:116208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:116216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:116224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:116232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:26.173 [2024-12-15 05:36:39.608229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:116240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:116248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:116256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:116264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:116272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608311] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:116280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:116288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:116296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:116304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:116312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:116320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:116328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:116336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:116344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:116352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:116360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:116368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:36:26.173 [2024-12-15 05:36:39.608478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:116376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:116384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:116392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:116400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:116408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608561] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:116416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.173 [2024-12-15 05:36:39.608576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:116424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.173 [2024-12-15 05:36:39.608582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:116432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.174 [2024-12-15 05:36:39.608596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:116440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.174 [2024-12-15 05:36:39.608610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:116448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.174 [2024-12-15 05:36:39.608625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:116456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.174 [2024-12-15 05:36:39.608639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:116464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.174 [2024-12-15 05:36:39.608653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:116472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.174 [2024-12-15 05:36:39.608667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:116480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:36:26.174 [2024-12-15 05:36:39.608681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2065cb0 is same with the state(6) to be set 00:36:26.174 [2024-12-15 05:36:39.608699] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:36:26.174 [2024-12-15 05:36:39.608705] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:36:26.174 [2024-12-15 05:36:39.608710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:116488 len:8 PRP1 0x0 PRP2 0x0 00:36:26.174 [2024-12-15 05:36:39.608718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608799] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) 
qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:36:26.174 [2024-12-15 05:36:39.608808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:36:26.174 [2024-12-15 05:36:39.608823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:36:26.174 [2024-12-15 05:36:39.608836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608844] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:36:26.174 [2024-12-15 05:36:39.608850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:36:26.174 [2024-12-15 05:36:39.608856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.174 [2024-12-15 05:36:39.611627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.174 [2024-12-15 05:36:39.611655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.174 [2024-12-15 05:36:39.612275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.174 [2024-12-15 05:36:39.612292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with 
addr=10.0.0.2, port=4420 00:36:26.174 [2024-12-15 05:36:39.612303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.174 [2024-12-15 05:36:39.612477] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.174 [2024-12-15 05:36:39.612650] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.174 [2024-12-15 05:36:39.612658] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.174 [2024-12-15 05:36:39.612666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.174 [2024-12-15 05:36:39.612674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.174 [2024-12-15 05:36:39.624740] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.174 [2024-12-15 05:36:39.625154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.174 [2024-12-15 05:36:39.625171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.174 [2024-12-15 05:36:39.625179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.174 [2024-12-15 05:36:39.625348] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.174 [2024-12-15 05:36:39.625519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.174 [2024-12-15 05:36:39.625527] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.174 [2024-12-15 05:36:39.625534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.174 [2024-12-15 05:36:39.625541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:26.174 [2024-12-15 05:36:39.637565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.174 [2024-12-15 05:36:39.637966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.174 [2024-12-15 05:36:39.637984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.174 [2024-12-15 05:36:39.637998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.174 [2024-12-15 05:36:39.638182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.174 [2024-12-15 05:36:39.638351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.174 [2024-12-15 05:36:39.638359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.174 [2024-12-15 05:36:39.638365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.174 [2024-12-15 05:36:39.638371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
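The recurring failure in the cycle above is `posix_sock_create: connect() failed, errno = 111`. A minimal sketch (not part of the SPDK test, and not SPDK code) showing that errno: on Linux, 111 is `ECONNREFUSED`, which is what a TCP `connect()` returns when nothing is listening on the target address, exactly the condition `nvme_tcp_qpair_connect_sock` keeps hitting while the target at 10.0.0.2:4420 is down. The address and port used here are illustrative.

```python
import errno
import socket


def try_connect(addr: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success or the errno on failure."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(1.0)
        try:
            sock.connect((addr, port))
            return 0
        except OSError as exc:
            # ConnectionRefusedError carries errno.ECONNREFUSED (111 on Linux),
            # matching the "connect() failed, errno = 111" records in the log.
            return exc.errno if exc.errno is not None else -1


if __name__ == "__main__":
    # Port 1 on localhost is essentially never listening, so on a typical
    # Linux host this returns errno.ECONNREFUSED.
    print(try_connect("127.0.0.1", 1), errno.ECONNREFUSED)
```

After the listener comes back (here, the NVMe-oF target re-binding 4420), the same call would return 0 and the reconnect cycle in the log would stop.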
00:36:26.174-00:36:26.176 [2024-12-15 05:36:39.650388 - 05:36:39.882947] (19 further identical reconnect cycles against tqpair=0x2069cf0, addr=10.0.0.2, port=4420 omitted; each repeats: resetting controller -> connect() failed, errno = 111 -> sock connection error -> Failed to flush (9): Bad file descriptor -> Ctrlr is in error state -> controller reinitialization failed -> Resetting controller failed.)
00:36:26.437 10299.33 IOPS, 40.23 MiB/s [2024-12-15T04:36:40.124Z] [2024-12-15 05:36:39.894926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.437 [2024-12-15 05:36:39.895354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.437 [2024-12-15 05:36:39.895371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.437 [2024-12-15 05:36:39.895379] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.437 [2024-12-15 05:36:39.895546] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.437 [2024-12-15 05:36:39.895714] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.437 [2024-12-15 05:36:39.895723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.437 [2024-12-15 05:36:39.895729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.437 [2024-12-15 05:36:39.895735] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.437-00:36:26.438 [2024-12-15 05:36:39.907770 - 05:36:39.949689] (4 further identical reconnect cycles against tqpair=0x2069cf0, addr=10.0.0.2, port=4420 omitted, each ending in: Resetting controller failed.)
00:36:26.438 [2024-12-15 05:36:39.961678] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:39.962071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:39.962088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:39.962095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:39.962267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:39.962442] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:39.962450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:39.962457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:39.962463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:39.974476] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:39.974871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:39.974887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:39.974894] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:39.975078] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:39.975245] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:39.975253] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:39.975259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:39.975265] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:39.987314] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:39.987724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:39.987740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:39.987747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:39.987915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:39.988088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:39.988097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:39.988103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:39.988109] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:40.000184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:40.000533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:40.000552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:40.000561] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:40.000731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:40.000915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:40.000924] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:40.000934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:40.000940] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:40.013394] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:40.013846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:40.013865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:40.013874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:40.014065] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:40.014250] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:40.014260] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:40.014267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:40.014274] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:40.027676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:40.028084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:40.028108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:40.028120] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:40.028332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:40.028560] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:40.028574] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:40.028585] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:40.028595] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:40.041102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:40.041545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:40.041572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:40.041584] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:40.041783] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:40.041984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:40.042011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:40.042023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:40.042033] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:40.054149] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:40.054587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:40.054605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:40.054614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:40.054787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:40.054961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:40.054969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:40.054976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:40.054982] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:40.067190] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:40.067529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:40.067546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:40.067554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:40.067727] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:40.067899] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:40.067908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:40.067914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:40.067920] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.438 [2024-12-15 05:36:40.080330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.438 [2024-12-15 05:36:40.080733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.438 [2024-12-15 05:36:40.080749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.438 [2024-12-15 05:36:40.080757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.438 [2024-12-15 05:36:40.080930] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.438 [2024-12-15 05:36:40.081111] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.438 [2024-12-15 05:36:40.081120] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.438 [2024-12-15 05:36:40.081127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.438 [2024-12-15 05:36:40.081133] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.439 [2024-12-15 05:36:40.093412] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.439 [2024-12-15 05:36:40.093834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.439 [2024-12-15 05:36:40.093850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.439 [2024-12-15 05:36:40.093861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.439 [2024-12-15 05:36:40.094039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.439 [2024-12-15 05:36:40.094237] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.439 [2024-12-15 05:36:40.094246] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.439 [2024-12-15 05:36:40.094253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.439 [2024-12-15 05:36:40.094260] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.439 [2024-12-15 05:36:40.106423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.439 [2024-12-15 05:36:40.106868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.439 [2024-12-15 05:36:40.106885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.439 [2024-12-15 05:36:40.106893] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.439 [2024-12-15 05:36:40.107073] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.439 [2024-12-15 05:36:40.107246] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.439 [2024-12-15 05:36:40.107254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.439 [2024-12-15 05:36:40.107261] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.439 [2024-12-15 05:36:40.107267] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.439 [2024-12-15 05:36:40.119507] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.439 [2024-12-15 05:36:40.119958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.439 [2024-12-15 05:36:40.119975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.439 [2024-12-15 05:36:40.119984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.439 [2024-12-15 05:36:40.120164] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.439 [2024-12-15 05:36:40.120337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.439 [2024-12-15 05:36:40.120346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.439 [2024-12-15 05:36:40.120352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.439 [2024-12-15 05:36:40.120359] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.699 [2024-12-15 05:36:40.132598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.699 [2024-12-15 05:36:40.132959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.699 [2024-12-15 05:36:40.132976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.699 [2024-12-15 05:36:40.132984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.699 [2024-12-15 05:36:40.133165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.699 [2024-12-15 05:36:40.133343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.699 [2024-12-15 05:36:40.133352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.699 [2024-12-15 05:36:40.133359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.699 [2024-12-15 05:36:40.133365] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.699 [2024-12-15 05:36:40.145593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.699 [2024-12-15 05:36:40.146029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.699 [2024-12-15 05:36:40.146075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.699 [2024-12-15 05:36:40.146098] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.699 [2024-12-15 05:36:40.146682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.699 [2024-12-15 05:36:40.146947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.699 [2024-12-15 05:36:40.146955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.699 [2024-12-15 05:36:40.146962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.699 [2024-12-15 05:36:40.146968] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.699 [2024-12-15 05:36:40.158688] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.699 [2024-12-15 05:36:40.159035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.699 [2024-12-15 05:36:40.159053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.699 [2024-12-15 05:36:40.159061] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.699 [2024-12-15 05:36:40.159235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.699 [2024-12-15 05:36:40.159409] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.699 [2024-12-15 05:36:40.159417] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.699 [2024-12-15 05:36:40.159424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.700 [2024-12-15 05:36:40.159430] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.700 [2024-12-15 05:36:40.171819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.700 [2024-12-15 05:36:40.172166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.700 [2024-12-15 05:36:40.172182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.700 [2024-12-15 05:36:40.172190] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.700 [2024-12-15 05:36:40.172363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.700 [2024-12-15 05:36:40.172536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.700 [2024-12-15 05:36:40.172544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.700 [2024-12-15 05:36:40.172558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.700 [2024-12-15 05:36:40.172564] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.700 [2024-12-15 05:36:40.184793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.700 [2024-12-15 05:36:40.185137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.700 [2024-12-15 05:36:40.185154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.700 [2024-12-15 05:36:40.185161] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.700 [2024-12-15 05:36:40.185334] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.700 [2024-12-15 05:36:40.185506] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.700 [2024-12-15 05:36:40.185514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.700 [2024-12-15 05:36:40.185521] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.700 [2024-12-15 05:36:40.185527] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.700 [2024-12-15 05:36:40.197909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.700 [2024-12-15 05:36:40.198318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.700 [2024-12-15 05:36:40.198335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.700 [2024-12-15 05:36:40.198343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.700 [2024-12-15 05:36:40.198515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.700 [2024-12-15 05:36:40.198687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.700 [2024-12-15 05:36:40.198695] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.700 [2024-12-15 05:36:40.198702] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.700 [2024-12-15 05:36:40.198708] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.700 [2024-12-15 05:36:40.210957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.700 [2024-12-15 05:36:40.211368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.700 [2024-12-15 05:36:40.211386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.700 [2024-12-15 05:36:40.211393] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.700 [2024-12-15 05:36:40.211567] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.700 [2024-12-15 05:36:40.211739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.700 [2024-12-15 05:36:40.211748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.700 [2024-12-15 05:36:40.211754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.700 [2024-12-15 05:36:40.211761] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.700 [2024-12-15 05:36:40.223989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.700 [2024-12-15 05:36:40.224358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.700 [2024-12-15 05:36:40.224374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.700 [2024-12-15 05:36:40.224381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.700 [2024-12-15 05:36:40.224554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.700 [2024-12-15 05:36:40.224730] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.700 [2024-12-15 05:36:40.224739] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.700 [2024-12-15 05:36:40.224745] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.700 [2024-12-15 05:36:40.224752] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.700 [2024-12-15 05:36:40.236990] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.700 [2024-12-15 05:36:40.237397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.700 [2024-12-15 05:36:40.237414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.700 [2024-12-15 05:36:40.237422] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.700 [2024-12-15 05:36:40.237595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.700 [2024-12-15 05:36:40.237768] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.700 [2024-12-15 05:36:40.237776] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.700 [2024-12-15 05:36:40.237783] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.700 [2024-12-15 05:36:40.237789] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.700 [2024-12-15 05:36:40.250034] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.700 [2024-12-15 05:36:40.250387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.700 [2024-12-15 05:36:40.250404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.700 [2024-12-15 05:36:40.250411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.700 [2024-12-15 05:36:40.250584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.700 [2024-12-15 05:36:40.250756] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.700 [2024-12-15 05:36:40.250764] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.700 [2024-12-15 05:36:40.250770] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.700 [2024-12-15 05:36:40.250776] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.700 [2024-12-15 05:36:40.262997] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.700 [2024-12-15 05:36:40.263438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.700 [2024-12-15 05:36:40.263454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.700 [2024-12-15 05:36:40.263465] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.700 [2024-12-15 05:36:40.263638] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.700 [2024-12-15 05:36:40.263811] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.700 [2024-12-15 05:36:40.263819] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.700 [2024-12-15 05:36:40.263825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.700 [2024-12-15 05:36:40.263831] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.700 [2024-12-15 05:36:40.276092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.700 [2024-12-15 05:36:40.276377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.700 [2024-12-15 05:36:40.276393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.700 [2024-12-15 05:36:40.276400] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.700 [2024-12-15 05:36:40.276568] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.700 [2024-12-15 05:36:40.276736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.700 [2024-12-15 05:36:40.276744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.700 [2024-12-15 05:36:40.276766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.700 [2024-12-15 05:36:40.276773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.700 [2024-12-15 05:36:40.289140] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.700 [2024-12-15 05:36:40.289438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.700 [2024-12-15 05:36:40.289455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.700 [2024-12-15 05:36:40.289462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.700 [2024-12-15 05:36:40.289633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.700 [2024-12-15 05:36:40.289806] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.700 [2024-12-15 05:36:40.289814] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.700 [2024-12-15 05:36:40.289821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.700 [2024-12-15 05:36:40.289827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.700 [2024-12-15 05:36:40.302220] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.701 [2024-12-15 05:36:40.302507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.701 [2024-12-15 05:36:40.302523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.701 [2024-12-15 05:36:40.302530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.701 [2024-12-15 05:36:40.302703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.701 [2024-12-15 05:36:40.302879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.701 [2024-12-15 05:36:40.302888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.701 [2024-12-15 05:36:40.302894] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.701 [2024-12-15 05:36:40.302900] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.701 [2024-12-15 05:36:40.315250] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.701 [2024-12-15 05:36:40.315580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.701 [2024-12-15 05:36:40.315596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.701 [2024-12-15 05:36:40.315604] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.701 [2024-12-15 05:36:40.315777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.701 [2024-12-15 05:36:40.315950] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.701 [2024-12-15 05:36:40.315958] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.701 [2024-12-15 05:36:40.315964] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.701 [2024-12-15 05:36:40.315971] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.701 [2024-12-15 05:36:40.328365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.701 [2024-12-15 05:36:40.328823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.701 [2024-12-15 05:36:40.328839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.701 [2024-12-15 05:36:40.328846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.701 [2024-12-15 05:36:40.329025] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.701 [2024-12-15 05:36:40.329198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.701 [2024-12-15 05:36:40.329206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.701 [2024-12-15 05:36:40.329213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.701 [2024-12-15 05:36:40.329219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.701 [2024-12-15 05:36:40.341447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.701 [2024-12-15 05:36:40.341808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.701 [2024-12-15 05:36:40.341825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.701 [2024-12-15 05:36:40.341832] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.701 [2024-12-15 05:36:40.342011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.701 [2024-12-15 05:36:40.342184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.701 [2024-12-15 05:36:40.342193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.701 [2024-12-15 05:36:40.342202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.701 [2024-12-15 05:36:40.342209] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.701 [2024-12-15 05:36:40.354560] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.701 [2024-12-15 05:36:40.354874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.701 [2024-12-15 05:36:40.354919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.701 [2024-12-15 05:36:40.354943] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.701 [2024-12-15 05:36:40.355538] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.701 [2024-12-15 05:36:40.356088] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.701 [2024-12-15 05:36:40.356097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.701 [2024-12-15 05:36:40.356103] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.701 [2024-12-15 05:36:40.356110] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.701 [2024-12-15 05:36:40.367677] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.701 [2024-12-15 05:36:40.367977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.701 [2024-12-15 05:36:40.368000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.701 [2024-12-15 05:36:40.368008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.701 [2024-12-15 05:36:40.368181] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.701 [2024-12-15 05:36:40.368354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.701 [2024-12-15 05:36:40.368362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.701 [2024-12-15 05:36:40.368369] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.701 [2024-12-15 05:36:40.368375] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.701 [2024-12-15 05:36:40.380661] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.701 [2024-12-15 05:36:40.381022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.701 [2024-12-15 05:36:40.381069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.701 [2024-12-15 05:36:40.381093] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.701 [2024-12-15 05:36:40.381678] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.701 [2024-12-15 05:36:40.382257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.701 [2024-12-15 05:36:40.382266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.701 [2024-12-15 05:36:40.382273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.701 [2024-12-15 05:36:40.382279] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.393686] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.393976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.393998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.394005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.394178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.394351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.394359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.394366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.394372] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.406734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.407071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.407088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.407096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.407268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.407441] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.407449] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.407456] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.407463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.419847] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.420136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.420154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.420162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.420335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.420508] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.420517] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.420523] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.420529] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.432912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.433325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.433370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.433401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.433984] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.434418] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.434426] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.434433] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.434439] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.445998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.446433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.446477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.446500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.447017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.447190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.447199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.447206] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.447212] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.459040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.459477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.459522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.459545] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.460146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.460349] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.460358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.460365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.460371] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.472047] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.472456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.472473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.472481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.472652] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.472828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.472836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.472842] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.472848] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.485003] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.485358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.962 [2024-12-15 05:36:40.485390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.962 [2024-12-15 05:36:40.485416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.962 [2024-12-15 05:36:40.486017] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.962 [2024-12-15 05:36:40.486556] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.962 [2024-12-15 05:36:40.486564] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.962 [2024-12-15 05:36:40.486570] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.962 [2024-12-15 05:36:40.486576] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.962 [2024-12-15 05:36:40.498008] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.962 [2024-12-15 05:36:40.498459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.963 [2024-12-15 05:36:40.498504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.963 [2024-12-15 05:36:40.498527] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.963 [2024-12-15 05:36:40.499127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.963 [2024-12-15 05:36:40.499569] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.963 [2024-12-15 05:36:40.499578] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.963 [2024-12-15 05:36:40.499584] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.963 [2024-12-15 05:36:40.499590] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.963 [2024-12-15 05:36:40.510924] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.963 [2024-12-15 05:36:40.511308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.963 [2024-12-15 05:36:40.511353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.963 [2024-12-15 05:36:40.511375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.963 [2024-12-15 05:36:40.511957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.963 [2024-12-15 05:36:40.512518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.963 [2024-12-15 05:36:40.512528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.963 [2024-12-15 05:36:40.512538] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.963 [2024-12-15 05:36:40.512544] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.963 [2024-12-15 05:36:40.523846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.963 [2024-12-15 05:36:40.524295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.963 [2024-12-15 05:36:40.524311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.963 [2024-12-15 05:36:40.524318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.963 [2024-12-15 05:36:40.524490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.963 [2024-12-15 05:36:40.524663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.963 [2024-12-15 05:36:40.524671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.963 [2024-12-15 05:36:40.524677] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.963 [2024-12-15 05:36:40.524683] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.963 [2024-12-15 05:36:40.536821] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.963 [2024-12-15 05:36:40.537257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.963 [2024-12-15 05:36:40.537273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.963 [2024-12-15 05:36:40.537280] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.963 [2024-12-15 05:36:40.537452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.963 [2024-12-15 05:36:40.537624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.963 [2024-12-15 05:36:40.537633] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.963 [2024-12-15 05:36:40.537639] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.963 [2024-12-15 05:36:40.537645] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.963 [2024-12-15 05:36:40.549819] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:26.963 [2024-12-15 05:36:40.550267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:26.963 [2024-12-15 05:36:40.550283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:26.963 [2024-12-15 05:36:40.550290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:26.963 [2024-12-15 05:36:40.550463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:26.963 [2024-12-15 05:36:40.550636] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:26.963 [2024-12-15 05:36:40.550644] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:26.963 [2024-12-15 05:36:40.550650] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:26.963 [2024-12-15 05:36:40.550656] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:26.963 [2024-12-15 05:36:40.562838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.963 [2024-12-15 05:36:40.563293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.963 [2024-12-15 05:36:40.563309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.963 [2024-12-15 05:36:40.563316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.963 [2024-12-15 05:36:40.563488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.963 [2024-12-15 05:36:40.563661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.963 [2024-12-15 05:36:40.563669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.963 [2024-12-15 05:36:40.563675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.963 [2024-12-15 05:36:40.563681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.963 [2024-12-15 05:36:40.575638] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.963 [2024-12-15 05:36:40.576064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.963 [2024-12-15 05:36:40.576109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.963 [2024-12-15 05:36:40.576132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.963 [2024-12-15 05:36:40.576595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.963 [2024-12-15 05:36:40.576754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.963 [2024-12-15 05:36:40.576762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.963 [2024-12-15 05:36:40.576768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.963 [2024-12-15 05:36:40.576773] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.963 [2024-12-15 05:36:40.588404] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.963 [2024-12-15 05:36:40.588793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.963 [2024-12-15 05:36:40.588808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.963 [2024-12-15 05:36:40.588815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.963 [2024-12-15 05:36:40.588973] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.963 [2024-12-15 05:36:40.589159] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.963 [2024-12-15 05:36:40.589168] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.963 [2024-12-15 05:36:40.589174] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.963 [2024-12-15 05:36:40.589180] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.963 [2024-12-15 05:36:40.601215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.963 [2024-12-15 05:36:40.601630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.963 [2024-12-15 05:36:40.601646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.963 [2024-12-15 05:36:40.601656] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.963 [2024-12-15 05:36:40.601815] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.963 [2024-12-15 05:36:40.601974] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.963 [2024-12-15 05:36:40.601982] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.963 [2024-12-15 05:36:40.601987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.963 [2024-12-15 05:36:40.601999] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.963 [2024-12-15 05:36:40.613986] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.963 [2024-12-15 05:36:40.614328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.963 [2024-12-15 05:36:40.614343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.963 [2024-12-15 05:36:40.614350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.963 [2024-12-15 05:36:40.614509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.963 [2024-12-15 05:36:40.614667] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.963 [2024-12-15 05:36:40.614674] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.963 [2024-12-15 05:36:40.614680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.963 [2024-12-15 05:36:40.614685] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.963 [2024-12-15 05:36:40.626784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.963 [2024-12-15 05:36:40.627231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.963 [2024-12-15 05:36:40.627248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.963 [2024-12-15 05:36:40.627256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.964 [2024-12-15 05:36:40.627428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.964 [2024-12-15 05:36:40.627601] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.964 [2024-12-15 05:36:40.627609] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.964 [2024-12-15 05:36:40.627615] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.964 [2024-12-15 05:36:40.627621] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:26.964 [2024-12-15 05:36:40.639828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:26.964 [2024-12-15 05:36:40.640274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:26.964 [2024-12-15 05:36:40.640291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:26.964 [2024-12-15 05:36:40.640298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:26.964 [2024-12-15 05:36:40.640466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:26.964 [2024-12-15 05:36:40.640637] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:26.964 [2024-12-15 05:36:40.640645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:26.964 [2024-12-15 05:36:40.640652] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:26.964 [2024-12-15 05:36:40.640658] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.225 [2024-12-15 05:36:40.652779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.225 [2024-12-15 05:36:40.653199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.225 [2024-12-15 05:36:40.653216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.225 [2024-12-15 05:36:40.653223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.225 [2024-12-15 05:36:40.653391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.225 [2024-12-15 05:36:40.653559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.225 [2024-12-15 05:36:40.653568] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.225 [2024-12-15 05:36:40.653574] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.225 [2024-12-15 05:36:40.653580] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.225 [2024-12-15 05:36:40.665555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.225 [2024-12-15 05:36:40.665974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.225 [2024-12-15 05:36:40.666030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.225 [2024-12-15 05:36:40.666054] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.225 [2024-12-15 05:36:40.666637] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.225 [2024-12-15 05:36:40.667006] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.225 [2024-12-15 05:36:40.667014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.225 [2024-12-15 05:36:40.667020] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.225 [2024-12-15 05:36:40.667026] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.225 [2024-12-15 05:36:40.678387] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.225 [2024-12-15 05:36:40.678807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.225 [2024-12-15 05:36:40.678852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.225 [2024-12-15 05:36:40.678875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.225 [2024-12-15 05:36:40.679474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.225 [2024-12-15 05:36:40.679963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.225 [2024-12-15 05:36:40.679971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.225 [2024-12-15 05:36:40.679978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.225 [2024-12-15 05:36:40.679987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.225 [2024-12-15 05:36:40.691118] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.225 [2024-12-15 05:36:40.691532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.225 [2024-12-15 05:36:40.691548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.225 [2024-12-15 05:36:40.691555] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.691713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.691872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.691880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.691886] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.691891] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.703896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.704261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.704278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.704285] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.704452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.704620] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.704627] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.704634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.704640] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.716685] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.717021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.717037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.717044] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.717203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.717361] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.717369] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.717375] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.717380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.729627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.730053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.730098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.730122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.730705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.731183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.731192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.731198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.731205] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.742528] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.742854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.742870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.742878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.743068] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.743242] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.743251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.743257] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.743263] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.755317] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.755730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.755746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.755754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.755921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.756093] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.756102] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.756109] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.756115] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.768263] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.768696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.768739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.768770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.769250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.769419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.769427] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.769434] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.769440] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.781122] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.781562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.781579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.781586] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.781753] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.781922] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.781930] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.781936] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.781942] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.793943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.794379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.794395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.794403] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.794570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.794738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.794746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.794752] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.794758] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.806894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.807321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.807339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.807346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.807514] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.226 [2024-12-15 05:36:40.807693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.226 [2024-12-15 05:36:40.807702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.226 [2024-12-15 05:36:40.807708] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.226 [2024-12-15 05:36:40.807714] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.226 [2024-12-15 05:36:40.819624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.226 [2024-12-15 05:36:40.819966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.226 [2024-12-15 05:36:40.819982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.226 [2024-12-15 05:36:40.819990] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.226 [2024-12-15 05:36:40.820178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.820346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.820355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.820362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.820368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.227 [2024-12-15 05:36:40.832352] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.227 [2024-12-15 05:36:40.832712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.227 [2024-12-15 05:36:40.832728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.227 [2024-12-15 05:36:40.832735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.227 [2024-12-15 05:36:40.832902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.833074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.833082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.833089] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.833095] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.227 [2024-12-15 05:36:40.845301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.227 [2024-12-15 05:36:40.845713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.227 [2024-12-15 05:36:40.845728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.227 [2024-12-15 05:36:40.845735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.227 [2024-12-15 05:36:40.845893] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.846077] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.846086] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.846092] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.846102] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.227 [2024-12-15 05:36:40.858044] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.227 [2024-12-15 05:36:40.858467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.227 [2024-12-15 05:36:40.858483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.227 [2024-12-15 05:36:40.858490] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.227 [2024-12-15 05:36:40.858649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.858808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.858816] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.858821] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.858827] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.227 [2024-12-15 05:36:40.870873] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.227 [2024-12-15 05:36:40.871321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.227 [2024-12-15 05:36:40.871366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.227 [2024-12-15 05:36:40.871389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.227 [2024-12-15 05:36:40.871972] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.872166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.872174] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.872180] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.872186] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.227 [2024-12-15 05:36:40.883626] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.227 [2024-12-15 05:36:40.884104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.227 [2024-12-15 05:36:40.884121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.227 [2024-12-15 05:36:40.884129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.227 [2024-12-15 05:36:40.884311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.884483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.884492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.884498] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.884504] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.227 7724.50 IOPS, 30.17 MiB/s [2024-12-15T04:36:40.914Z] [2024-12-15 05:36:40.896624] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.227 [2024-12-15 05:36:40.897063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.227 [2024-12-15 05:36:40.897109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.227 [2024-12-15 05:36:40.897132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.227 [2024-12-15 05:36:40.897402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.897575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.897584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.897590] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.897596] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.227 [2024-12-15 05:36:40.909520] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.227 [2024-12-15 05:36:40.909955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.227 [2024-12-15 05:36:40.909972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.227 [2024-12-15 05:36:40.909980] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.227 [2024-12-15 05:36:40.910159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.227 [2024-12-15 05:36:40.910332] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.227 [2024-12-15 05:36:40.910341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.227 [2024-12-15 05:36:40.910347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.227 [2024-12-15 05:36:40.910353] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:40.922353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:40.922792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:40.922808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:40.922815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:40.922983] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:40.923158] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:40.923167] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:40.923173] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:40.923179] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:40.935156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:40.935574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:40.935618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:40.935649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:40.936248] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:40.936767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:40.936784] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:40.936798] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:40.936811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:40.950061] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:40.950576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:40.950625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:40.950649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:40.951249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:40.951747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:40.951759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:40.951769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:40.951778] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:40.963064] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:40.963484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:40.963501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:40.963508] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:40.963676] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:40.963843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:40.963852] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:40.963858] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:40.963864] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:40.975829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:40.976153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:40.976170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:40.976177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:40.976344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:40.976515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:40.976523] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:40.976529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:40.976535] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:40.988590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:40.988922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:40.988938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:40.988944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:40.989130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:40.989298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:40.989306] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:40.989312] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:40.989318] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:41.001396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:41.001797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:41.001842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:41.001866] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:41.002464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:41.002705] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:41.002713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:41.002720] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:41.002726] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:41.014164] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:41.014585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:41.014600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:41.014607] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:41.014765] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:41.014924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:41.014932] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:41.014941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:41.014947] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.488 [2024-12-15 05:36:41.026998] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.488 [2024-12-15 05:36:41.027417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.488 [2024-12-15 05:36:41.027448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.488 [2024-12-15 05:36:41.027456] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.488 [2024-12-15 05:36:41.028063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.488 [2024-12-15 05:36:41.028651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.488 [2024-12-15 05:36:41.028676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.488 [2024-12-15 05:36:41.028696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.488 [2024-12-15 05:36:41.028716] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.039739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.040173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.040219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.040242] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.040766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.040926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.040933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.040939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.040945] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.052516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.052950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.052967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.052974] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.053148] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.053316] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.053325] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.053330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.053336] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.065374] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.065766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.065781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.065788] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.065947] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.066134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.066143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.066149] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.066155] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.078193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.078620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.078664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.078687] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.079284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.079820] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.079828] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.079834] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.079840] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.090981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.091402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.091418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.091425] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.091584] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.091742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.091749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.091755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.091760] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.103828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.104217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.104233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.104243] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.104411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.104579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.104587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.104593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.104599] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.116634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.117039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.117085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.117109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.117693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.118115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.118124] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.118131] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.118136] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.131727] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.132249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.132271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.132281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.132535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.132789] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.132800] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.132810] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.132818] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.144848] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:27.489 [2024-12-15 05:36:41.145254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:27.489 [2024-12-15 05:36:41.145271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:27.489 [2024-12-15 05:36:41.145278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:27.489 [2024-12-15 05:36:41.145450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:27.489 [2024-12-15 05:36:41.145626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:27.489 [2024-12-15 05:36:41.145635] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:27.489 [2024-12-15 05:36:41.145641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:27.489 [2024-12-15 05:36:41.145647] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:27.489 [2024-12-15 05:36:41.157871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.489 [2024-12-15 05:36:41.158309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.489 [2024-12-15 05:36:41.158325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.489 [2024-12-15 05:36:41.158333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.489 [2024-12-15 05:36:41.158501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.489 [2024-12-15 05:36:41.158668] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.489 [2024-12-15 05:36:41.158676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.489 [2024-12-15 05:36:41.158682] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.489 [2024-12-15 05:36:41.158688] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.489 [2024-12-15 05:36:41.170943] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.490 [2024-12-15 05:36:41.171308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.490 [2024-12-15 05:36:41.171325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.490 [2024-12-15 05:36:41.171333] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.490 [2024-12-15 05:36:41.171505] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.490 [2024-12-15 05:36:41.171678] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.490 [2024-12-15 05:36:41.171686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.490 [2024-12-15 05:36:41.171693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.490 [2024-12-15 05:36:41.171699] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.183694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.184037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.184053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.184060] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.184219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.184378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.184386] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.184395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.184401] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.196434] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.196845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.196897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.196920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.197517] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.198117] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.198125] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.198132] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.198138] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.209279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.209684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.209730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.209755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.210198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.210367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.210375] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.210381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.210388] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.222133] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.222558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.222574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.222580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.222739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.222898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.222906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.222912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.222918] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.234900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.235252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.235295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.235318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.235899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.236208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.236217] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.236223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.236229] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.247710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.248055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.248101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.248125] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.248706] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.249161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.249170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.249176] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.249182] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.260503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.260888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.260903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.260910] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.261094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.261262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.261270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.261277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.261284] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.273295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.273731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.273747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.273758] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.273925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.274099] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.274108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.274114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.274120] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.286018] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.286428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.286443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.286450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.286608] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.286767] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.286774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.286780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.286786] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.298830] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.299274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.299291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.299298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.299465] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.299632] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.299640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.299646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.299652] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.311700] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.312124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.312168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.312191] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.312774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.313359] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.313368] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.313374] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.313380] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.324544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.324893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.324937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.324960] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.325485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.325656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.325664] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.325671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.325677] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.337277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.337693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.337709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.337716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.337875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.338056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.338065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.338071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.338077] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.350042] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.350460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.350475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.350482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.350641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.350800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.350808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.350817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.350823] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.362859] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.363300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.363345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.363369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.363803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.363971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.363980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.363986] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.363998] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.375602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.375928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.375944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.375951] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.376136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.376303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.376311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.376318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.376324] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.388362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.388768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.750 [2024-12-15 05:36:41.388784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.750 [2024-12-15 05:36:41.388790] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.750 [2024-12-15 05:36:41.388949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.750 [2024-12-15 05:36:41.389134] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.750 [2024-12-15 05:36:41.389143] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.750 [2024-12-15 05:36:41.389150] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.750 [2024-12-15 05:36:41.389156] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.750 [2024-12-15 05:36:41.401191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.750 [2024-12-15 05:36:41.401609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.751 [2024-12-15 05:36:41.401624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.751 [2024-12-15 05:36:41.401632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.751 [2024-12-15 05:36:41.401799] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.751 [2024-12-15 05:36:41.401967] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.751 [2024-12-15 05:36:41.401975] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.751 [2024-12-15 05:36:41.401981] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.751 [2024-12-15 05:36:41.401987] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.751 [2024-12-15 05:36:41.414155] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.751 [2024-12-15 05:36:41.414580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.751 [2024-12-15 05:36:41.414625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.751 [2024-12-15 05:36:41.414648] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.751 [2024-12-15 05:36:41.415245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.751 [2024-12-15 05:36:41.415791] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.751 [2024-12-15 05:36:41.415799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.751 [2024-12-15 05:36:41.415805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.751 [2024-12-15 05:36:41.415811] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:27.751 [2024-12-15 05:36:41.427083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:27.751 [2024-12-15 05:36:41.427438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:27.751 [2024-12-15 05:36:41.427454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:27.751 [2024-12-15 05:36:41.427462] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:27.751 [2024-12-15 05:36:41.427629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:27.751 [2024-12-15 05:36:41.427797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:27.751 [2024-12-15 05:36:41.427805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:27.751 [2024-12-15 05:36:41.427811] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:27.751 [2024-12-15 05:36:41.427817] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:28.010 [2024-12-15 05:36:41.440087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:28.010 [2024-12-15 05:36:41.440457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:28.010 [2024-12-15 05:36:41.440500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:28.010 [2024-12-15 05:36:41.440539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:28.010 [2024-12-15 05:36:41.441133] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:28.010 [2024-12-15 05:36:41.441717] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:28.010 [2024-12-15 05:36:41.441726] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:28.010 [2024-12-15 05:36:41.441732] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:28.010 [2024-12-15 05:36:41.441738] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:28.010 [2024-12-15 05:36:41.452853] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:28.010 [2024-12-15 05:36:41.453235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:28.010 [2024-12-15 05:36:41.453280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:28.010 [2024-12-15 05:36:41.453304] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:28.010 [2024-12-15 05:36:41.453808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:28.010 [2024-12-15 05:36:41.453977] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:28.010 [2024-12-15 05:36:41.453986] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:28.010 [2024-12-15 05:36:41.454000] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:28.010 [2024-12-15 05:36:41.454007] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:28.010 [2024-12-15 05:36:41.465977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:28.010 [2024-12-15 05:36:41.466281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:28.010 [2024-12-15 05:36:41.466298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:28.010 [2024-12-15 05:36:41.466305] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:28.010 [2024-12-15 05:36:41.466478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:28.010 [2024-12-15 05:36:41.466653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:28.010 [2024-12-15 05:36:41.466661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:28.010 [2024-12-15 05:36:41.466667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:28.010 [2024-12-15 05:36:41.466674] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:28.010 [2024-12-15 05:36:41.478755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:28.010 [2024-12-15 05:36:41.479125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:28.010 [2024-12-15 05:36:41.479142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:28.010 [2024-12-15 05:36:41.479149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:28.010 [2024-12-15 05:36:41.479317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:28.010 [2024-12-15 05:36:41.479487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:28.010 [2024-12-15 05:36:41.479496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:28.010 [2024-12-15 05:36:41.479502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:28.010 [2024-12-15 05:36:41.479508] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:28.010 [2024-12-15 05:36:41.491561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:28.011 [2024-12-15 05:36:41.491897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:28.011 [2024-12-15 05:36:41.491913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:28.011 [2024-12-15 05:36:41.491920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:28.011 [2024-12-15 05:36:41.492093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:28.011 [2024-12-15 05:36:41.492262] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:28.011 [2024-12-15 05:36:41.492270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:28.011 [2024-12-15 05:36:41.492276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:28.011 [2024-12-15 05:36:41.492282] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:28.011 [2024-12-15 05:36:41.504300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:28.011 [2024-12-15 05:36:41.504715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:28.011 [2024-12-15 05:36:41.504732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:28.011 [2024-12-15 05:36:41.504739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:28.011 [2024-12-15 05:36:41.504906] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:28.011 [2024-12-15 05:36:41.505079] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:28.011 [2024-12-15 05:36:41.505088] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:28.011 [2024-12-15 05:36:41.505094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:28.011 [2024-12-15 05:36:41.505100] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:28.011 [2024-12-15 05:36:41.517157] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.517474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.517517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.517539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.518080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.518248] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.518257] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.518266] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.518273] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.530120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.530413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.530429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.530436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.530604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.530772] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.530780] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.530786] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.530792] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.542862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.543193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.543209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.543216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.543383] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.543550] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.543559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.543565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.543570] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.555711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.556063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.556109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.556132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.556714] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.557225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.557234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.557241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.557246] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.568462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.568833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.568848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.568855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.569028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.569197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.569205] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.569212] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.569218] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.581319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.581620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.581636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.581644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.581811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.581979] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.581987] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.581999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.582006] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.594069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.594418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.594435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.594442] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.594609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.594777] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.594785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.594791] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.594797] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.606954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.607291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.607307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.607318] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.607486] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.607653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.607661] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.607667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.607673] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.619832] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.620163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.620179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.620187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.620355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.620522] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.620529] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.620535] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.620541] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.632733] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.633063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.633081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.633088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.633256] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.633425] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.633433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.633439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.633445] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.645563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.645984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.646007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.646015] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.646183] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.646354] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.646362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.646368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.646374] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.658384] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.658744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.658761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.658769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.658941] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.659120] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.659129] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.659135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.659142] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.671248] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.671680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.671696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.671703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.671875] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.672053] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.672062] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.672068] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.672075] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.011 [2024-12-15 05:36:41.684170] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.011 [2024-12-15 05:36:41.684528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.011 [2024-12-15 05:36:41.684544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.011 [2024-12-15 05:36:41.684552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.011 [2024-12-15 05:36:41.684724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.011 [2024-12-15 05:36:41.684896] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.011 [2024-12-15 05:36:41.684905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.011 [2024-12-15 05:36:41.684915] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.011 [2024-12-15 05:36:41.684921] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.697059] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.697435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.697451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.697459] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.697631] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.697803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.697812] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.697818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.697824] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.709914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.710273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.710318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.710342] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.710924] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.711443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.711452] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.711459] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.711465] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.722776] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.723133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.723150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.723157] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.723325] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.723493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.723501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.723507] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.723513] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.735525] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.735833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.735849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.735856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.736030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.736199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.736207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.736214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.736219] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.748430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.748908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.748951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.748975] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.749540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.749710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.749719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.749725] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.749731] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.761286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.761728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.761772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.761795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.762349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.762518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.762526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.762532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.762538] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.774185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.774460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.774476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.774487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.774655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.774823] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.774831] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.774836] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.774842] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.272 [2024-12-15 05:36:41.786919] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.272 [2024-12-15 05:36:41.787317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.272 [2024-12-15 05:36:41.787334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.272 [2024-12-15 05:36:41.787341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.272 [2024-12-15 05:36:41.787509] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.272 [2024-12-15 05:36:41.787677] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.272 [2024-12-15 05:36:41.787685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.272 [2024-12-15 05:36:41.787691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.272 [2024-12-15 05:36:41.787697] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.799712] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.800081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.800099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.800107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.800279] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.800438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.800446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.800452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.800458] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.812545] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.812976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.812999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.813008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.813175] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.813346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.813355] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.813362] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.813368] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.825282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.825693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.825709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.825716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.825883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.826057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.826067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.826073] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.826079] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.838203] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.838528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.838544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.838551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.838718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.838886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.838895] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.838901] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.838907] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.851138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.851566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.851610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.851634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.852227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.852749] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.852758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.852768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.852775] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.864006] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.864420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.864437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.864444] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.864611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.864779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.864787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.864793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.864799] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.877120] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.877480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.877496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.877503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.877670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.877839] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.877847] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.877853] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.877859] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 6179.60 IOPS, 24.14 MiB/s [2024-12-15T04:36:41.960Z] [2024-12-15 05:36:41.891069] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.891505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.891521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.891528] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.891696] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.891864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.891872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.891878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.891884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.903865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.904283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.904299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.904329] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.904877] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.905052] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.905061] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.905067] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.905074] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.916664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.273 [2024-12-15 05:36:41.917103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.273 [2024-12-15 05:36:41.917121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.273 [2024-12-15 05:36:41.917128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.273 [2024-12-15 05:36:41.917295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.273 [2024-12-15 05:36:41.917463] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.273 [2024-12-15 05:36:41.917471] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.273 [2024-12-15 05:36:41.917477] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.273 [2024-12-15 05:36:41.917483] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.273 [2024-12-15 05:36:41.929709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.274 [2024-12-15 05:36:41.930115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.274 [2024-12-15 05:36:41.930133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.274 [2024-12-15 05:36:41.930141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.274 [2024-12-15 05:36:41.930322] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.274 [2024-12-15 05:36:41.930490] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.274 [2024-12-15 05:36:41.930498] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.274 [2024-12-15 05:36:41.930504] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.274 [2024-12-15 05:36:41.930510] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.274 [2024-12-15 05:36:41.942540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.274 [2024-12-15 05:36:41.942960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.274 [2024-12-15 05:36:41.943016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.274 [2024-12-15 05:36:41.943050] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.274 [2024-12-15 05:36:41.943633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.274 [2024-12-15 05:36:41.943864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.274 [2024-12-15 05:36:41.943872] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.274 [2024-12-15 05:36:41.943878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.274 [2024-12-15 05:36:41.943884] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.274 [2024-12-15 05:36:41.955501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.274 [2024-12-15 05:36:41.955898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.274 [2024-12-15 05:36:41.955915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.274 [2024-12-15 05:36:41.955922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.274 [2024-12-15 05:36:41.956097] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.274 [2024-12-15 05:36:41.956266] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.274 [2024-12-15 05:36:41.956275] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.274 [2024-12-15 05:36:41.956281] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.274 [2024-12-15 05:36:41.956287] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:41.968359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:41.968771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:41.968788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:41.968795] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:41.968963] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:41.969138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:41.969146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:41.969152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:41.969159] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:41.981089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:41.981500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:41.981516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:41.981523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:41.981690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:41.981861] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:41.981870] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:41.981876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:41.981882] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:41.993918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:41.994331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:41.994348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:41.994355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:41.994523] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:41.994690] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:41.994698] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:41.994704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:41.994710] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:42.006641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:42.007006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:42.007022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:42.007029] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:42.007188] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:42.007346] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:42.007354] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:42.007360] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:42.007366] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:42.019521] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:42.019936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:42.019952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:42.019959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:42.020136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:42.020305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:42.020313] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:42.020323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:42.020329] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:42.032358] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:42.032773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:42.032817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:42.032841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:42.033297] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:42.033467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:42.033475] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:42.033481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:42.033487] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:42.045117] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:42.045532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:42.045577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:42.045600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:42.046120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:42.046289] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:42.046298] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:42.046303] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:42.046309] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:42.057900] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:42.058293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:42.058310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:42.058317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:42.058484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:42.058652] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:42.058660] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:42.058667] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.535 [2024-12-15 05:36:42.058672] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.535 [2024-12-15 05:36:42.070696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.535 [2024-12-15 05:36:42.071097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.535 [2024-12-15 05:36:42.071142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.535 [2024-12-15 05:36:42.071165] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.535 [2024-12-15 05:36:42.071747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.535 [2024-12-15 05:36:42.072133] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.535 [2024-12-15 05:36:42.072142] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.535 [2024-12-15 05:36:42.072148] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.536 [2024-12-15 05:36:42.072154] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.536 [2024-12-15 05:36:42.083557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.536 [2024-12-15 05:36:42.083947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.536 [2024-12-15 05:36:42.083962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.536 [2024-12-15 05:36:42.083969] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.536 [2024-12-15 05:36:42.084156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.536 [2024-12-15 05:36:42.084324] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.536 [2024-12-15 05:36:42.084332] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.536 [2024-12-15 05:36:42.084338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.536 [2024-12-15 05:36:42.084344] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.536 - 00:36:28.799 [2024-12-15 05:36:42.096401 - 05:36:42.430457] (reset/connect-refused/fail cycle for [nqn.2016-06.io.spdk:cnode1, 2], tqpair=0x2069cf0, repeated 27 more times; every reconnect attempt to addr=10.0.0.2, port=4420 failed with errno = 111)
00:36:28.799 [2024-12-15 05:36:42.442653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.799 [2024-12-15 05:36:42.443075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.799 [2024-12-15 05:36:42.443092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.799 [2024-12-15 05:36:42.443103] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.799 [2024-12-15 05:36:42.443283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.799 [2024-12-15 05:36:42.443451] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.799 [2024-12-15 05:36:42.443460] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.799 [2024-12-15 05:36:42.443466] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.799 [2024-12-15 05:36:42.443472] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.799 [2024-12-15 05:36:42.455613] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.799 [2024-12-15 05:36:42.456032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.799 [2024-12-15 05:36:42.456076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.799 [2024-12-15 05:36:42.456099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.799 [2024-12-15 05:36:42.456681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.799 [2024-12-15 05:36:42.457007] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.799 [2024-12-15 05:36:42.457015] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.799 [2024-12-15 05:36:42.457022] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.799 [2024-12-15 05:36:42.457028] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.799 [2024-12-15 05:36:42.468403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.799 [2024-12-15 05:36:42.468797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.799 [2024-12-15 05:36:42.468813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.799 [2024-12-15 05:36:42.468820] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.799 [2024-12-15 05:36:42.468978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.799 [2024-12-15 05:36:42.469166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.799 [2024-12-15 05:36:42.469175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.799 [2024-12-15 05:36:42.469181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.799 [2024-12-15 05:36:42.469187] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:28.799 [2024-12-15 05:36:42.481369] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:28.799 [2024-12-15 05:36:42.481659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:28.799 [2024-12-15 05:36:42.481675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:28.799 [2024-12-15 05:36:42.481682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:28.799 [2024-12-15 05:36:42.481850] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:28.799 [2024-12-15 05:36:42.482027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:28.799 [2024-12-15 05:36:42.482036] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:28.799 [2024-12-15 05:36:42.482043] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:28.799 [2024-12-15 05:36:42.482049] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.060 [2024-12-15 05:36:42.494215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.060 [2024-12-15 05:36:42.494626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.060 [2024-12-15 05:36:42.494642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.060 [2024-12-15 05:36:42.494649] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.060 [2024-12-15 05:36:42.494817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.060 [2024-12-15 05:36:42.494984] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.060 [2024-12-15 05:36:42.494999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.060 [2024-12-15 05:36:42.495005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.060 [2024-12-15 05:36:42.495011] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.060 [2024-12-15 05:36:42.506988] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.060 [2024-12-15 05:36:42.507375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.060 [2024-12-15 05:36:42.507411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.060 [2024-12-15 05:36:42.507436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.060 [2024-12-15 05:36:42.508034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.060 [2024-12-15 05:36:42.508286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.060 [2024-12-15 05:36:42.508294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.060 [2024-12-15 05:36:42.508300] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.060 [2024-12-15 05:36:42.508306] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.060 [2024-12-15 05:36:42.519835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.060 [2024-12-15 05:36:42.520249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.060 [2024-12-15 05:36:42.520265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.060 [2024-12-15 05:36:42.520272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.060 [2024-12-15 05:36:42.520439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.060 [2024-12-15 05:36:42.520607] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.060 [2024-12-15 05:36:42.520615] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.060 [2024-12-15 05:36:42.520624] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.060 [2024-12-15 05:36:42.520631] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.060 [2024-12-15 05:36:42.532675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.060 [2024-12-15 05:36:42.533084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.060 [2024-12-15 05:36:42.533130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.060 [2024-12-15 05:36:42.533153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.060 [2024-12-15 05:36:42.533654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.060 [2024-12-15 05:36:42.533814] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.060 [2024-12-15 05:36:42.533822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.060 [2024-12-15 05:36:42.533828] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.060 [2024-12-15 05:36:42.533833] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.060 [2024-12-15 05:36:42.545444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.060 [2024-12-15 05:36:42.545780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.060 [2024-12-15 05:36:42.545796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.060 [2024-12-15 05:36:42.545803] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.060 [2024-12-15 05:36:42.545970] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.060 [2024-12-15 05:36:42.546145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.060 [2024-12-15 05:36:42.546153] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.060 [2024-12-15 05:36:42.546160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.060 [2024-12-15 05:36:42.546166] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.060 [2024-12-15 05:36:42.558300] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.060 [2024-12-15 05:36:42.558696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.060 [2024-12-15 05:36:42.558712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.060 [2024-12-15 05:36:42.558719] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.558886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.559061] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.559070] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.559076] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.061 [2024-12-15 05:36:42.559083] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.061 [2024-12-15 05:36:42.571119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.571517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.571561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.571585] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.572184] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.572669] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.572676] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.572683] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.061 [2024-12-15 05:36:42.572689] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.061 [2024-12-15 05:36:42.583932] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.584295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.584313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.584320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.584487] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.584655] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.584663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.584669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.061 [2024-12-15 05:36:42.584675] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.061 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 536071 Killed "${NVMF_APP[@]}" "$@" 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.061 [2024-12-15 05:36:42.596927] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.597332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.597350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.597358] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.597531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.597704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.597713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.597719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 
00:36:29.061 [2024-12-15 05:36:42.597729] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@509 -- # nvmfpid=537287 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@510 -- # waitforlisten 537287 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 537287 ']' 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:29.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:29.061 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:29.061 [2024-12-15 05:36:42.609942] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.610362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.610379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.610387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.610559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.610732] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.610741] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.610747] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.061 [2024-12-15 05:36:42.610753] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.061 [2024-12-15 05:36:42.622890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.623293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.623308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.623316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.623488] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.623660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.623669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.623675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.061 [2024-12-15 05:36:42.623681] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.061 [2024-12-15 05:36:42.635866] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.636312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.636332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.636339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.636507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.636676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.636685] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.636692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.061 [2024-12-15 05:36:42.636699] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.061 [2024-12-15 05:36:42.648750] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.649083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.649100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.649107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.649275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.649443] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.649451] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.649457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.061 [2024-12-15 05:36:42.649463] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:36:29.061 [2024-12-15 05:36:42.650396] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:36:29.061 [2024-12-15 05:36:42.650447] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:29.061 [2024-12-15 05:36:42.661721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:36:29.061 [2024-12-15 05:36:42.662170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:29.061 [2024-12-15 05:36:42.662217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420 00:36:29.061 [2024-12-15 05:36:42.662241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set 00:36:29.061 [2024-12-15 05:36:42.662677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor 00:36:29.061 [2024-12-15 05:36:42.662847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:36:29.061 [2024-12-15 05:36:42.662856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:36:29.061 [2024-12-15 05:36:42.662863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:36:29.062 [2024-12-15 05:36:42.662870] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:36:29.062 [2024-12-15 05:36:42.674568] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.062 [2024-12-15 05:36:42.674926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.062 [2024-12-15 05:36:42.674949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.062 [2024-12-15 05:36:42.674957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.062 [2024-12-15 05:36:42.675131] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.062 [2024-12-15 05:36:42.675301] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.062 [2024-12-15 05:36:42.675309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.062 [2024-12-15 05:36:42.675316] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.062 [2024-12-15 05:36:42.675322] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.062 [2024-12-15 05:36:42.687495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.062 [2024-12-15 05:36:42.687934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.062 [2024-12-15 05:36:42.687951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.062 [2024-12-15 05:36:42.687959] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.062 [2024-12-15 05:36:42.688136] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.062 [2024-12-15 05:36:42.688309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.062 [2024-12-15 05:36:42.688318] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.062 [2024-12-15 05:36:42.688324] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.062 [2024-12-15 05:36:42.688330] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.062 [2024-12-15 05:36:42.700571] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.062 [2024-12-15 05:36:42.700900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.062 [2024-12-15 05:36:42.700917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.062 [2024-12-15 05:36:42.700925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.062 [2024-12-15 05:36:42.701102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.062 [2024-12-15 05:36:42.701275] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.062 [2024-12-15 05:36:42.701284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.062 [2024-12-15 05:36:42.701290] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.062 [2024-12-15 05:36:42.701296] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.062 [2024-12-15 05:36:42.713584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.062 [2024-12-15 05:36:42.714030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.062 [2024-12-15 05:36:42.714048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.062 [2024-12-15 05:36:42.714056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.062 [2024-12-15 05:36:42.714232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.062 [2024-12-15 05:36:42.714405] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.062 [2024-12-15 05:36:42.714414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.062 [2024-12-15 05:36:42.714420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.062 [2024-12-15 05:36:42.714426] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.062 [2024-12-15 05:36:42.726482] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.062 [2024-12-15 05:36:42.726836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.062 [2024-12-15 05:36:42.726852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.062 [2024-12-15 05:36:42.726861] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.062 [2024-12-15 05:36:42.727034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.062 [2024-12-15 05:36:42.727203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.062 [2024-12-15 05:36:42.727212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.062 [2024-12-15 05:36:42.727218] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.062 [2024-12-15 05:36:42.727224] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.062 [2024-12-15 05:36:42.729460] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:36:29.062 [2024-12-15 05:36:42.739495] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.062 [2024-12-15 05:36:42.739936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.062 [2024-12-15 05:36:42.739956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.062 [2024-12-15 05:36:42.739965] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.062 [2024-12-15 05:36:42.740146] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.062 [2024-12-15 05:36:42.740322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.062 [2024-12-15 05:36:42.740331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.062 [2024-12-15 05:36:42.740338] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.062 [2024-12-15 05:36:42.740347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.751145] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:29.323 [2024-12-15 05:36:42.751177] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:29.323 [2024-12-15 05:36:42.751184] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:29.323 [2024-12-15 05:36:42.751190] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:29.323 [2024-12-15 05:36:42.751195] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:29.323 [2024-12-15 05:36:42.752380] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:36:29.323 [2024-12-15 05:36:42.752488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:36:29.323 [2024-12-15 05:36:42.752591] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.752489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:36:29.323 [2024-12-15 05:36:42.752958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.752978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.752987] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.753168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.753343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.753353] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.753361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.753369] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.765581] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.766019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.766042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.766051] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.766225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.766400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.766410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.766418] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.766425] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.778644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.779076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.779097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.779106] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.779280] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.779467] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.779476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.779483] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.779490] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.791714] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.792172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.792193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.792201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.792374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.792548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.792557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.792564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.792571] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.804774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.805211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.805232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.805241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.805417] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.805592] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.805600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.805608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.805615] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.817824] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.818245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.818264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.818272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.818446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.818619] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.818628] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.818635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.818641] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.830835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.831263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.831281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.831288] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.831466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.831645] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.831654] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.831661] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.831667] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 [2024-12-15 05:36:42.843954] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.844394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.844412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.844419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.323 [2024-12-15 05:36:42.844593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.323 [2024-12-15 05:36:42.844766] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.323 [2024-12-15 05:36:42.844775] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.323 [2024-12-15 05:36:42.844781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.323 [2024-12-15 05:36:42.844788] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.323 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:29.323 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0
00:36:29.323 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:29.323 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:29.323 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.323 [2024-12-15 05:36:42.857005] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.323 [2024-12-15 05:36:42.857342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.323 [2024-12-15 05:36:42.857359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.323 [2024-12-15 05:36:42.857368] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.857543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.857719] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.857729] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.857736] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.857744] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 [2024-12-15 05:36:42.870115] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.870401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.324 [2024-12-15 05:36:42.870419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.324 [2024-12-15 05:36:42.870431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.870604] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.870784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.870793] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.870799] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.870805] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 [2024-12-15 05:36:42.883267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.883565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.324 [2024-12-15 05:36:42.883583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.324 [2024-12-15 05:36:42.883591] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.883775] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.883962] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.883971] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.883979] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.883986] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.324 5149.67 IOPS, 20.12 MiB/s [2024-12-15T04:36:43.011Z] [2024-12-15 05:36:42.895149] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:36:29.324 [2024-12-15 05:36:42.896569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.896947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.324 [2024-12-15 05:36:42.896965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.324 [2024-12-15 05:36:42.896973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.897152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.897327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.897336] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.897343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.897350] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.324 [2024-12-15 05:36:42.909566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.909947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.324 [2024-12-15 05:36:42.909965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.324 [2024-12-15 05:36:42.909973] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.910152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.910325] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.910334] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.910341] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.910347] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 [2024-12-15 05:36:42.922566] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.922916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.324 [2024-12-15 05:36:42.922934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.324 [2024-12-15 05:36:42.922941] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.923118] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.923293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.923301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.923308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.923314] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 Malloc0
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.324 [2024-12-15 05:36:42.935689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.936019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.324 [2024-12-15 05:36:42.936037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.324 [2024-12-15 05:36:42.936045] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.936219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.936392] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.936405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.936411] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.936418] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.324 [2024-12-15 05:36:42.948808] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.949152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:29.324 [2024-12-15 05:36:42.949169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2069cf0 with addr=10.0.0.2, port=4420
00:36:29.324 [2024-12-15 05:36:42.949177] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2069cf0 is same with the state(6) to be set
00:36:29.324 [2024-12-15 05:36:42.949350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2069cf0 (9): Bad file descriptor
00:36:29.324 [2024-12-15 05:36:42.949524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:36:29.324 [2024-12-15 05:36:42.949532] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:36:29.324 [2024-12-15 05:36:42.949539] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:36:29.324 [2024-12-15 05:36:42.949545] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:36:29.324 [2024-12-15 05:36:42.953976] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:29.324 05:36:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 536388
00:36:29.324 [2024-12-15 05:36:42.961783] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:36:29.324 [2024-12-15 05:36:42.990724] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful.
00:36:31.640 5896.14 IOPS, 23.03 MiB/s [2024-12-15T04:36:46.263Z] 6595.50 IOPS, 25.76 MiB/s [2024-12-15T04:36:47.200Z] 7138.33 IOPS, 27.88 MiB/s [2024-12-15T04:36:48.137Z] 7573.90 IOPS, 29.59 MiB/s [2024-12-15T04:36:49.074Z] 7945.82 IOPS, 31.04 MiB/s [2024-12-15T04:36:50.010Z] 8243.17 IOPS, 32.20 MiB/s [2024-12-15T04:36:50.946Z] 8491.46 IOPS, 33.17 MiB/s [2024-12-15T04:36:52.323Z] 8714.07 IOPS, 34.04 MiB/s [2024-12-15T04:36:52.323Z] 8904.73 IOPS, 34.78 MiB/s 00:36:38.636 Latency(us) 00:36:38.636 [2024-12-15T04:36:52.323Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:38.636 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:36:38.636 Verification LBA range: start 0x0 length 0x4000 00:36:38.636 Nvme1n1 : 15.01 8903.42 34.78 11022.64 0.00 6403.59 592.94 23842.62 00:36:38.636 [2024-12-15T04:36:52.323Z] =================================================================================================================== 00:36:38.636 [2024-12-15T04:36:52.323Z] Total : 8903.42 34.78 11022.64 0.00 6403.59 592.94 23842.62 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@121 -- # sync 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set +e 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:38.636 rmmod nvme_tcp 00:36:38.636 rmmod nvme_fabrics 00:36:38.636 rmmod nvme_keyring 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@128 -- # set -e 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # return 0 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@517 -- # '[' -n 537287 ']' 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@518 -- # killprocess 537287 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 537287 ']' 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 537287 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 537287 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 537287' 00:36:38.636 killing process with pid 537287 00:36:38.636 05:36:52 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 537287 00:36:38.636 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 537287 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # iptr 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-save 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@791 -- # iptables-restore 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:38.896 05:36:52 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:40.800 05:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:36:40.800 00:36:40.800 real 0m25.892s 00:36:40.800 user 1m0.535s 00:36:40.800 sys 0m6.682s 00:36:40.800 05:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:40.800 05:36:54 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:36:40.800 ************************************ 00:36:40.800 END TEST nvmf_bdevperf 00:36:40.800 
************************************ 00:36:41.059 05:36:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:41.059 05:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:36:41.059 05:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:41.059 05:36:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:36:41.059 ************************************ 00:36:41.059 START TEST nvmf_target_disconnect 00:36:41.059 ************************************ 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:36:41.060 * Looking for test storage... 00:36:41.060 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lcov --version 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@336 -- # read -ra ver1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:36:41.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.060 --rc genhtml_branch_coverage=1 00:36:41.060 --rc genhtml_function_coverage=1 00:36:41.060 --rc genhtml_legend=1 00:36:41.060 --rc geninfo_all_blocks=1 00:36:41.060 --rc geninfo_unexecuted_blocks=1 
00:36:41.060 00:36:41.060 ' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:36:41.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.060 --rc genhtml_branch_coverage=1 00:36:41.060 --rc genhtml_function_coverage=1 00:36:41.060 --rc genhtml_legend=1 00:36:41.060 --rc geninfo_all_blocks=1 00:36:41.060 --rc geninfo_unexecuted_blocks=1 00:36:41.060 00:36:41.060 ' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:36:41.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.060 --rc genhtml_branch_coverage=1 00:36:41.060 --rc genhtml_function_coverage=1 00:36:41.060 --rc genhtml_legend=1 00:36:41.060 --rc geninfo_all_blocks=1 00:36:41.060 --rc geninfo_unexecuted_blocks=1 00:36:41.060 00:36:41.060 ' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:36:41.060 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:36:41.060 --rc genhtml_branch_coverage=1 00:36:41.060 --rc genhtml_function_coverage=1 00:36:41.060 --rc genhtml_legend=1 00:36:41.060 --rc geninfo_all_blocks=1 00:36:41.060 --rc geninfo_unexecuted_blocks=1 00:36:41.060 00:36:41.060 ' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # : 0 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:36:41.060 05:36:54 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:36:41.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:36:41.060 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@55 -- # have_pci_nics=0 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@476 -- # prepare_net_devs 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@438 -- # local -g is_hw=no 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@440 -- # remove_spdk_ns 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # xtrace_disable 00:36:41.319 05:36:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # pci_devs=() 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # local -a pci_devs 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # pci_net_devs=() 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # pci_drivers=() 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # local -A pci_drivers 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # net_devs=() 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@319 -- # local -ga net_devs 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # e810=() 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # local -ga e810 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # x722=() 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # local -ga x722 00:36:46.671 
05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # mlx=() 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@322 -- # local -ga mlx 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:36:46.671 Found 0000:af:00.0 (0x8086 - 0x159b) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:36:46.671 Found 0000:af:00.1 (0x8086 - 0x159b) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@376 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:36:46.671 Found net devices under 0000:af:00.0: cvl_0_0 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # [[ up == up ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:36:46.671 Found net devices under 0000:af:00.1: cvl_0_1 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@442 -- # is_hw=yes 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@256 -- # (( 2 > 1 )) 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:36:46.671 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:36:46.949 05:37:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:36:46.949 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:36:46.949 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.359 ms 00:36:46.949 00:36:46.949 --- 10.0.0.2 ping statistics --- 00:36:46.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.949 rtt min/avg/max/mdev = 0.359/0.359/0.359/0.000 ms 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:36:46.949 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:36:46.949 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.187 ms 00:36:46.949 00:36:46.949 --- 10.0.0.1 ping statistics --- 00:36:46.949 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:36:46.949 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # return 0 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:36:46.949 05:37:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:46.949 ************************************ 00:36:46.949 START TEST nvmf_target_disconnect_tc1 00:36:46.949 ************************************ 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- 
common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:46.949 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:46.950 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:46.950 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:46.950 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:36:46.950 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:36:46.950 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:47.245 [2024-12-15 05:37:00.723002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:47.245 [2024-12-15 05:37:00.723051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x9f8590 with 
addr=10.0.0.2, port=4420 00:36:47.245 [2024-12-15 05:37:00.723072] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:36:47.245 [2024-12-15 05:37:00.723084] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:36:47.245 [2024-12-15 05:37:00.723090] nvme.c: 951:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:36:47.245 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:36:47.245 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:36:47.245 Initializing NVMe Controllers 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:47.245 00:36:47.245 real 0m0.118s 00:36:47.245 user 0m0.044s 00:36:47.245 sys 0m0.073s 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:36:47.245 ************************************ 00:36:47.245 END TEST nvmf_target_disconnect_tc1 00:36:47.245 ************************************ 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:36:47.245 05:37:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:36:47.245 ************************************ 00:36:47.245 START TEST nvmf_target_disconnect_tc2 00:36:47.245 ************************************ 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=542351 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 542351 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 542351 ']' 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:47.245 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:47.245 05:37:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.245 [2024-12-15 05:37:00.862494] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:36:47.245 [2024-12-15 05:37:00.862537] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:36:47.554 [2024-12-15 05:37:00.938925] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:47.554 [2024-12-15 05:37:00.961355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:36:47.554 [2024-12-15 05:37:00.961395] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:36:47.554 [2024-12-15 05:37:00.961401] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:36:47.554 [2024-12-15 05:37:00.961407] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:36:47.554 [2024-12-15 05:37:00.961412] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:36:47.554 [2024-12-15 05:37:00.962785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:36:47.554 [2024-12-15 05:37:00.962894] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:36:47.554 [2024-12-15 05:37:00.962979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:36:47.554 [2024-12-15 05:37:00.962979] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.554 Malloc0 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.554 05:37:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.554 [2024-12-15 05:37:01.135505] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.554 05:37:01 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.554 [2024-12-15 05:37:01.164494] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=542377 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:36:47.554 05:37:01 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:36:49.621 05:37:03 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 542351 00:36:49.621 05:37:03 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read 
completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Write completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.621 [2024-12-15 05:37:03.196645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:36:49.621 starting I/O failed 00:36:49.621 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 
00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 
00:36:49.622 [2024-12-15 05:37:03.196836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 
starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Write completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 Read completed with error (sct=0, sc=8) 00:36:49.622 starting I/O failed 00:36:49.622 [2024-12-15 05:37:03.197032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:36:49.622 [2024-12-15 05:37:03.197130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.197149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.197287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.197297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 
00:36:49.622 [2024-12-15 05:37:03.197390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.197399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.197483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.197498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.197695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.197705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.197836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.197845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.197981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.197995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 
00:36:49.622 [2024-12-15 05:37:03.198165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.198179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.198266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.198275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.198421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.198431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.198489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.198499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.198652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.198662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 
00:36:49.622 [2024-12-15 05:37:03.198745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.198755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.198842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.622 [2024-12-15 05:37:03.198851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.622 qpair failed and we were unable to recover it. 00:36:49.622 [2024-12-15 05:37:03.199039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.199150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.199243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-12-15 05:37:03.199322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.199410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.199475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.199559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.199654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-12-15 05:37:03.199817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.199913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.199923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-12-15 05:37:03.200253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-12-15 05:37:03.200813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.200952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.200962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-12-15 05:37:03.201376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-12-15 05:37:03.201836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.201845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.201984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.202000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.202127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.202137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.202208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.202217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.202357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.202367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 
00:36:49.623 [2024-12-15 05:37:03.202450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.202460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.623 [2024-12-15 05:37:03.202538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.623 [2024-12-15 05:37:03.202548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.623 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.202613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.202623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.202697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.202706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.202832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.202842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-12-15 05:37:03.202919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.202928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.203016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.203163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.203231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.203364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-12-15 05:37:03.203459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.203600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.203671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.203806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.203815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Read completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 Write completed with error (sct=0, sc=8)
00:36:49.624 starting I/O failed
00:36:49.624 [2024-12-15 05:37:03.204015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:49.624 [2024-12-15 05:37:03.204109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.624 [2024-12-15 05:37:03.204132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:49.624 qpair failed and we were unable to recover it.
00:36:49.624 [2024-12-15 05:37:03.204283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.624 [2024-12-15 05:37:03.204295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:49.624 qpair failed and we were unable to recover it.
00:36:49.624 [2024-12-15 05:37:03.204355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.204364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.204515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.204525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.204609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.204619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.204686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.204695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.204773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.204783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-12-15 05:37:03.204945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.204956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.205111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.205122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.205189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.205198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.205267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.205276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.205346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.205355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 
00:36:49.624 [2024-12-15 05:37:03.205413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.205422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.205542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.205552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.205618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.624 [2024-12-15 05:37:03.205626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.624 qpair failed and we were unable to recover it. 00:36:49.624 [2024-12-15 05:37:03.205817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.205826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.205902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.205911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-12-15 05:37:03.205972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.205981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-12-15 05:37:03.206355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-12-15 05:37:03.206786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.206953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.206962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.207091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.207101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 00:36:49.625 [2024-12-15 05:37:03.207267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.625 [2024-12-15 05:37:03.207276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.625 qpair failed and we were unable to recover it. 
00:36:49.625 [2024-12-15 05:37:03.207334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.625 [2024-12-15 05:37:03.207343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:49.625 qpair failed and we were unable to recover it.
[... identical three-line error record (posix.c:1054:posix_sock_create: connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeated for every retry from 05:37:03.207343 through 05:37:03.218018 ...]
00:36:49.628 [2024-12-15 05:37:03.218151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-12-15 05:37:03.218164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-12-15 05:37:03.218317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-12-15 05:37:03.218330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-12-15 05:37:03.218476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-12-15 05:37:03.218489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-12-15 05:37:03.218570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-12-15 05:37:03.218583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-12-15 05:37:03.218648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-12-15 05:37:03.218660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 
00:36:49.628 [2024-12-15 05:37:03.218743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-12-15 05:37:03.218756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.628 [2024-12-15 05:37:03.218830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.628 [2024-12-15 05:37:03.218845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.628 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.218920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.218933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.218999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.219150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-12-15 05:37:03.219246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.219399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.219492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.219607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.219698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-12-15 05:37:03.219793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.219888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.219906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.220049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.220068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.220139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.220156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.220232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.220249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-12-15 05:37:03.220343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.220360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.220465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.220497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.220712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.220744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.220923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.220954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.221092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.221125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-12-15 05:37:03.221228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.221260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.221429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.221460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.221630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.221662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.221832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.221864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.222053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.222086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-12-15 05:37:03.222198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.222230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.222498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.222529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.222699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.222731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.222912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.222930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.223032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.223050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-12-15 05:37:03.223136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.223153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.223305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.223322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.223406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.223422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.223504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.223521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.223677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.223694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 
00:36:49.629 [2024-12-15 05:37:03.223846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.223877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.224057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.224091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.224217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.629 [2024-12-15 05:37:03.224249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.629 qpair failed and we were unable to recover it. 00:36:49.629 [2024-12-15 05:37:03.224362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.224393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.224486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.224503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.224658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.224675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.224756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.224775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.224855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.224872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.224977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.225023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.225202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.225234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.225406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.225438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.225610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.225627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.225702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.225719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.225855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.225872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.225949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.225966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.226113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.226131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.226231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.226248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.226324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.226341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.226430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.226447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.226528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.226573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.226700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.226732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.226869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.226901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.227025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.227058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.227229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.227261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.227362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.227393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.227493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.227525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.227650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.227681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.227913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.227944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.228079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.228111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.228288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.228319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.228432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.228463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.228578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.228594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.228682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.228699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.228885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.228902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.229141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.229159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.229322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.229353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.229524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.229555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.229742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.229774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.229954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.229985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.230108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.230141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 
00:36:49.630 [2024-12-15 05:37:03.230378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.230411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.630 [2024-12-15 05:37:03.230600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.630 [2024-12-15 05:37:03.230630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.630 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.230811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.230842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.230944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.230976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.231297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.231330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-12-15 05:37:03.231512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.231560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.231670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.231708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.231824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.231855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.232114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.232148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.232269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.232301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-12-15 05:37:03.232415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.232446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.232621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.232653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.232821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.232852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.233088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.233122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.233312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.233343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-12-15 05:37:03.233535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.233566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.233743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.233775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.234003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.234035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.234203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.234234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.234405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.234436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-12-15 05:37:03.234557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.234590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.234769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.234800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.234901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.234932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.235121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.235154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.235273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.235305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-12-15 05:37:03.235490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.235521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.235703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.235734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.235977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.236019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.236129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.236159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.236296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.236329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 
00:36:49.631 [2024-12-15 05:37:03.236438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.236474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.236703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.236734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.236852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.631 [2024-12-15 05:37:03.236883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.631 qpair failed and we were unable to recover it. 00:36:49.631 [2024-12-15 05:37:03.237018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.237052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.237173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.237204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.237395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.237427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.237540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.237572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.237682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.237714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.237882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.237913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.238108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.238141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.238242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.238273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.238457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.238488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.238610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.238641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.238750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.238782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.238920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.238951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.239162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.239194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.239459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.239497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.239671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.239703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.239886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.239917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.240121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.240154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.240322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.240354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.240465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.240496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.240666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.240698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.240827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.240859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.240965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.241007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.241196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.241229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.241408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.241440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.241555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.241586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.241794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.241826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.241958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.241990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.242133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.242165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.242279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.242310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.242419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.242451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.242557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.242588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.242695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.242726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.242914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.242946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.243104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.243136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.243314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.243345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.243519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.243551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 00:36:49.632 [2024-12-15 05:37:03.243722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.632 [2024-12-15 05:37:03.243753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.632 qpair failed and we were unable to recover it. 
00:36:49.632 [2024-12-15 05:37:03.243867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.243898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.244012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.244045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.244147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.244178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.244291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.244324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.244497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.244529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.633 [2024-12-15 05:37:03.244712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.244743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.244922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.244952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.245144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.245178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.245293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.245325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.245438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.245470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.633 [2024-12-15 05:37:03.245665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.245697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.245821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.245851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.246069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.246102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.246301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.246334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.246523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.246555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.633 [2024-12-15 05:37:03.246675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.246707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.246901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.246939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.247076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.247110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.247233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.247265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.247371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.247403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.633 [2024-12-15 05:37:03.247524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.247556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.247677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.247709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.247820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.247853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.247988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.248032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 00:36:49.633 [2024-12-15 05:37:03.248133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.633 [2024-12-15 05:37:03.248164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:49.633 qpair failed and we were unable to recover it. 
00:36:49.634 [2024-12-15 05:37:03.250863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.634 [2024-12-15 05:37:03.250895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:49.634 qpair failed and we were unable to recover it.
00:36:49.634 [2024-12-15 05:37:03.251008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.634 [2024-12-15 05:37:03.251040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:49.634 qpair failed and we were unable to recover it.
00:36:49.634 [2024-12-15 05:37:03.251210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.634 [2024-12-15 05:37:03.251242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:49.634 qpair failed and we were unable to recover it.
00:36:49.634 [2024-12-15 05:37:03.251491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.634 [2024-12-15 05:37:03.251563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:49.634 qpair failed and we were unable to recover it.
00:36:49.634 [2024-12-15 05:37:03.251779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.634 [2024-12-15 05:37:03.251814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:49.634 qpair failed and we were unable to recover it.
00:36:49.636 [2024-12-15 05:37:03.268508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.268540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-12-15 05:37:03.268645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.268677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-12-15 05:37:03.268922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.268955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-12-15 05:37:03.269150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.269233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-12-15 05:37:03.269374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.269411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 
00:36:49.636 [2024-12-15 05:37:03.269537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.269570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-12-15 05:37:03.269748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.269779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-12-15 05:37:03.269908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.636 [2024-12-15 05:37:03.269941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.636 qpair failed and we were unable to recover it. 00:36:49.636 [2024-12-15 05:37:03.270070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.270105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.270238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.270268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.270446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.270478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.270677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.270710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.270842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.270872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.271051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.271085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.271203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.271236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.271471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.271503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.271774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.271814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.271941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.271973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.272101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.272133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.272362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.272394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.272513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.272545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.272672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.272704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.272882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.272914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.273030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.273063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.273231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.273263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.273461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.273500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.273625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.273657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.273779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.273811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.273923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.273955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.274096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.274129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.274342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.274374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.274551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.274583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.274696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.274728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.274851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.274883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.275015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.275049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.275184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.275216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.275332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.275364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.275602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.275635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.275757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.275789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.275909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.275941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.276058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.276092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.276346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.276377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.276548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.276580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.276791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.276824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.276961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.277003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 
00:36:49.637 [2024-12-15 05:37:03.277193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.277226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.637 [2024-12-15 05:37:03.277337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.637 [2024-12-15 05:37:03.277369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.637 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.278838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.278893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.279203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.279239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.279365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.279398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-12-15 05:37:03.279532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.279565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.279746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.279777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.279904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.279936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.280129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.280162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.280284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.280315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-12-15 05:37:03.280450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.280482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.280722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.280761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.280949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.280981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.281163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.281194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.281434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.281466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-12-15 05:37:03.281572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.281604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.281784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.281816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.281988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.282029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.282155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.282187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.282302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.282334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-12-15 05:37:03.282459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.282490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.282691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.282723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.282911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.282943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.283067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.283100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.283215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.283246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-12-15 05:37:03.283377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.283409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.283579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.283610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.283733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.283765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.283888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.283920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 00:36:49.638 [2024-12-15 05:37:03.284043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.638 [2024-12-15 05:37:03.284076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.638 qpair failed and we were unable to recover it. 
00:36:49.638 [2024-12-15 05:37:03.284223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.284255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.284425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.284456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.284575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.284606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.284711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.284743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.284848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.284880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.285053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.285086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.285281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.285316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.285575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.285605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.285843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.285918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.286130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.638 [2024-12-15 05:37:03.286167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.638 qpair failed and we were unable to recover it.
00:36:49.638 [2024-12-15 05:37:03.286358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.639 [2024-12-15 05:37:03.286392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.639 qpair failed and we were unable to recover it.
00:36:49.639 [2024-12-15 05:37:03.286508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.639 [2024-12-15 05:37:03.286540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.639 qpair failed and we were unable to recover it.
00:36:49.639 [2024-12-15 05:37:03.286739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.639 [2024-12-15 05:37:03.286771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.639 qpair failed and we were unable to recover it.
00:36:49.639 [2024-12-15 05:37:03.286887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.639 [2024-12-15 05:37:03.286920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.639 qpair failed and we were unable to recover it.
00:36:49.639 [2024-12-15 05:37:03.287085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.639 [2024-12-15 05:37:03.287118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.639 qpair failed and we were unable to recover it.
00:36:49.639 [2024-12-15 05:37:03.287326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.639 [2024-12-15 05:37:03.287359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.639 qpair failed and we were unable to recover it.
00:36:49.639 [2024-12-15 05:37:03.287571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.639 [2024-12-15 05:37:03.287603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.639 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.287772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.287806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.287930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.287963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.288091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.288127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.288321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.288353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.288472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.288513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.288644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.288677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.288848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.288880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.289007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.289041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.289146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.289179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.289353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.289385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.289578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.289610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.289719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.289752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.289923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.289955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.290097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.290131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.290257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.290289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.290504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.290536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.290742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.290774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.290890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.290923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.291207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.291241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.291508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.291541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.291763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.291795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.292029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.292063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.292188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.292221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.292393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.292426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.292623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.292656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.292827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.292859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.292983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.293029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.293202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.293234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.293497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.293530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.293656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.293687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.293924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.293955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.294168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.294207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.294465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.294498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.294702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.294735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.294925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.294960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.295219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.295254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.295438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.295470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.295684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.295716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.295842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.295874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.296087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.296121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.296247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.296280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.296412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.296444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.296685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.296717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.296903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.296936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.297123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.297161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.297286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.297318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.297444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.297477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.297761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.297794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.298013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.298046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.298171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.298204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.298400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.298432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.298628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.298661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.298897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.298929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.299173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.299206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.299396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.299428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.299664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.299696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.299810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.299842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.300034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.300068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.300181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.300214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.300386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.300419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.300542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.300575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.300745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.300776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.300900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.300933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.301080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.301115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.301247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.301279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.301468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.301503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.301788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.301820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.302086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.302120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.302360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.302393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.302635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.302667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.302927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.302959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.303146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.303182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.303319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.936 [2024-12-15 05:37:03.303352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.936 qpair failed and we were unable to recover it.
00:36:49.936 [2024-12-15 05:37:03.303563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.303597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.303861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.303895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.304132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.304166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.304295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.304328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.304447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.304480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.304753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.304784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.304971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.305012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.305139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.305172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.305361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.305393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.305631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.305663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.305875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.305907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.306044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.306084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.306279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.306311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.306496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.306528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.306716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.306748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.307024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.307058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.307313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.307345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.307500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.307531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.307770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.307802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.308039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.308073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.308326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.308358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.308540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.308572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.308741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.308773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.309035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.309068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.309188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.937 [2024-12-15 05:37:03.309221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:49.937 qpair failed and we were unable to recover it.
00:36:49.937 [2024-12-15 05:37:03.309449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.309481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.309793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.309826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.310116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.310150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.310281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.310313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.310520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.310553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.310739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.310771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.310956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.310988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.311265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.311299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.311486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.311519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.311695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.311728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.311840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.311872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.312136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.312171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.312305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.312339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.312573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.312645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.312903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.312939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.313169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.313205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.313471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.313504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.313702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.313735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.313924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.313957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.314165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.314199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.314439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.314471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.314715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.314760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.314971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.315015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.315253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.315285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.315547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.315580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.315817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.315849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.316107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.316150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.316389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.316422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.316684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.316716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.317007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.317040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.317291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.317323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.317623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.317656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.317916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.317947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.318226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.318260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.318367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.318399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.318665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.318698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.318985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.319028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.319166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.319198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.319460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.319492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.319757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.319790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.319985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.320030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.320269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.320302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.320479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.320511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.320717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.320750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.321017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.321051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.321339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.321371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.321628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.321660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.321922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.321954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.322243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.322276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.322456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.322488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.322684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.322717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.322978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.323019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.323231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.323262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.323518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.323591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.323757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.323794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 
00:36:49.937 [2024-12-15 05:37:03.324059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.324098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.324320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.324354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.937 [2024-12-15 05:37:03.324559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.937 [2024-12-15 05:37:03.324591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.937 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.324853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.324886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.325258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.325295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
00:36:49.938 [2024-12-15 05:37:03.325396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.325429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.325612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.325646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.325780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.325814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.326093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.326129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.326367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.326400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
00:36:49.938 [2024-12-15 05:37:03.326535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.326569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.326703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.326737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.326890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.326921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.327027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.327062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.327191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.327224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
00:36:49.938 [2024-12-15 05:37:03.327409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.327442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.327631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.327663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.327857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.327890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.328045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.328081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 00:36:49.938 [2024-12-15 05:37:03.328221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.328254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
00:36:49.938 [2024-12-15 05:37:03.328378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.938 [2024-12-15 05:37:03.328410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.938 qpair failed and we were unable to recover it. 
[log condensed: the identical posix.c:1054 (connect() failed, errno = 111) / nvme_tcp.c:2288 (sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420) error pair repeats continuously from 05:37:03.328540 through 05:37:03.351751, each attempt ending with "qpair failed and we were unable to recover it."]
00:36:49.939 [2024-12-15 05:37:03.351960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.352000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.352279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.352311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.352505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.352537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.352760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.352793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.353057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.353090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-12-15 05:37:03.353227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.353259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.353374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.353406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.353537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.353569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.353702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.353734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.353944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.353978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-12-15 05:37:03.354098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.354130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.354363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.354396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.354577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.354617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.354800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.354832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.355012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.355046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-12-15 05:37:03.355160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.355193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.355402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.355434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.355555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.355587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.355711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.355744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.355934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.355965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 
00:36:49.939 [2024-12-15 05:37:03.356109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.356143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.356247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.356287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.356466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.939 [2024-12-15 05:37:03.356498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.939 qpair failed and we were unable to recover it. 00:36:49.939 [2024-12-15 05:37:03.356668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.356701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.356817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.356848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.357025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.357059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.357327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.357360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.357599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.357631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.357898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.357931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.358143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.358177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.358345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.358377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.358553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.358585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.358698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.358731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.358920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.358952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.359072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.359105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.359319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.359352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.359538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.359570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.359705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.359737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.359849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.359882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.359983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.360050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.360248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.360282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.360543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.360575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.360747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.360779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.361014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.361048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.361222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.361254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.361540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.361572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.361760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.361794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.362044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.362078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.362284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.362316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.362562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.362595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.362858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.362889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.363027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.363060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.363268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.363300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.363511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.363544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.363748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.363781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.364020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.364053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.364365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.364398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.364570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.364603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.364786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.364820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.365065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.365098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.365239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.365272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.365448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.365479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.365586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.365617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.365753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.365786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.365897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.365928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.366165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.366199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.366370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.366401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.366536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.366569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.366814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.366847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.366971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.367011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.367192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.367225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.367483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.367516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.367629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.367663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.367851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.367883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.368084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.368118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.940 [2024-12-15 05:37:03.368356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.368389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.368575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.368608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.368793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.368825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.368936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.368968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 00:36:49.940 [2024-12-15 05:37:03.369151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.940 [2024-12-15 05:37:03.369184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.940 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.396878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.396912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.397166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.397201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.397415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.397448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.397658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.397690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.397976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.398017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.398256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.398288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.398496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.398529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.398656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.398687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.398909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.398942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.399144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.399177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.399388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.399420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.399595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.399628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.399827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.399864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.400047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.400080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.400351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.400384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.400645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.400677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.400865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.400898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.401082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.401116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.401307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.401338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.401601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.401633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.401834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.401867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.402114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.402147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.402349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.402381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.402558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.402591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.402767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.402798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.403055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.403088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.403306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.403340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.403595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.403629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.403761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.403794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.404057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.404090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.404347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.404379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.404682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.404714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.404974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.405018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.405208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.405240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.405472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.405504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.405683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.405715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.405895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.405928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.406124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.406158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.406291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.406324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.406517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.406555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.406792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.406825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.407006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.407039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.407163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.407196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.407483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.407517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.407734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.407767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.407880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.407913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.408150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.408184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.408471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.408505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.408766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.408803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.409087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.409119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.409301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.409333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.409526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.409559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.409829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.409861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.410140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.410174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.942 [2024-12-15 05:37:03.410385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.410418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 
00:36:49.942 [2024-12-15 05:37:03.410657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.942 [2024-12-15 05:37:03.410688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.942 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.410950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.410983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.411218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.411253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.411460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.411492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.411704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.411737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 
00:36:49.943 [2024-12-15 05:37:03.411983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.412036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.412163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.412195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.412403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.412436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.412682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.412713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.413007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.413041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 
00:36:49.943 [2024-12-15 05:37:03.413230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.413263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.413381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.413418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.413667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.413700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.413936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.413969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.414187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.414220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 
00:36:49.943 [2024-12-15 05:37:03.414352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.414384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.414685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.414718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.415002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.415036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.415289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.415322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 00:36:49.943 [2024-12-15 05:37:03.415524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.943 [2024-12-15 05:37:03.415558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.943 qpair failed and we were unable to recover it. 
00:36:49.943 [2024-12-15 05:37:03.415736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.943 [2024-12-15 05:37:03.415768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:49.943 qpair failed and we were unable to recover it.
[the same three-line error (connect() failed, errno = 111 → sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it.) repeats continuously from 05:37:03.416025 through 05:37:03.444139; repeats elided]
00:36:49.944 [2024-12-15 05:37:03.444402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.444434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.444629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.444661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.444935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.444967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.445128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.445163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.445426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.445457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 
00:36:49.944 [2024-12-15 05:37:03.445630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.445663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.445927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.445960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.446146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.446180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.446442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.446474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.446715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.446748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 
00:36:49.944 [2024-12-15 05:37:03.446959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.446991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.447190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.447223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.447426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.447479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.447775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.447806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.448054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.448089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 
00:36:49.944 [2024-12-15 05:37:03.448326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.448361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.448655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.448685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.448950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.448982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.449233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.449266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.449452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.449484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 
00:36:49.944 [2024-12-15 05:37:03.449657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.449688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.449939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.449970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.944 qpair failed and we were unable to recover it. 00:36:49.944 [2024-12-15 05:37:03.450223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.944 [2024-12-15 05:37:03.450257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.450483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.450514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.450765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.450797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.451051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.451084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.451382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.451414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.451556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.451588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.451765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.451797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.452014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.452048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.452291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.452322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.452612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.452644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.452886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.452919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.453190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.453225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.453496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.453528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.453731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.453762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.453886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.453917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.454033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.454067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.454328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.454360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.454644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.454681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.454913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.454945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.455160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.455192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.455380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.455412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.455675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.455707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.455902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.455934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.456211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.456243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.456475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.456506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.456778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.456809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.457028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.457061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.457256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.457288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.457485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.457517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.457694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.457725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.458000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.458033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.458308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.458341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.458554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.458585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.458856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.458888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.459176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.459209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.459429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.459462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.459703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.459734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.460011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.460045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.460233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.460265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.460524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.460555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.460798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.460829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.461121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.461155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.461369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.461401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.461524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.461555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.461750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.461793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.462056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.462090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.462365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.462396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.462575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.462606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.462813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.462845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.462954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.462985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.463261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.463296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.463555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.463587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 00:36:49.945 [2024-12-15 05:37:03.463839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.945 [2024-12-15 05:37:03.463872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.945 qpair failed and we were unable to recover it. 
00:36:49.945 [2024-12-15 05:37:03.464168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:49.945 [2024-12-15 05:37:03.464202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:49.945 qpair failed and we were unable to recover it.
00:36:49.947 (the identical connect()-failed / sock-connection-error / qpair-failed record for tqpair=0x213c6a0, addr=10.0.0.2, port=4420 repeats continuously through [2024-12-15 05:37:03.495431]; the duplicate retry records are omitted here)
00:36:49.947 [2024-12-15 05:37:03.495580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.495612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.495887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.495919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.496224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.496258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.496538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.496570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.496824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.496856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.497105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.497140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.497363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.497396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.497578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.497611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.497894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.497926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.498215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.498249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.498525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.498558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.498751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.498783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.499067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.499101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.499378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.499411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.499695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.499728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.499933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.499966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.500240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.500274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.500546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.500578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.500703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.500736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.501009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.501043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.501316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.501349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.501599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.501632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.501895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.501927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.502109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.502144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.502347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.502380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.502647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.502679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.502960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.503005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.503282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.503315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.503575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.503607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.503907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.503940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.504151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.504185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.504461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.504493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.504765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.504797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.505010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.505045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.505324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.505356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.505631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.505664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.505953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.505985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.506265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.506299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.506586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.506618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.506897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.506930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.507218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.507252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.507475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.507507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.507786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.507818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.508020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.508054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.508248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.508280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.508537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.508569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.508886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.508918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.509171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.509205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.509397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.509428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.509708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.509740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.510030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.510064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.510279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.510311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.510528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.510561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.510839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.510871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.511158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.511192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 
00:36:49.947 [2024-12-15 05:37:03.511385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.511416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.511612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.947 [2024-12-15 05:37:03.511645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.947 qpair failed and we were unable to recover it. 00:36:49.947 [2024-12-15 05:37:03.511927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.511959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.512096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.512130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.512427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.512460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 
00:36:49.948 [2024-12-15 05:37:03.512638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.512670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.512941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.512973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.513211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.513245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.513438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.513470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.513689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.513721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 
00:36:49.948 [2024-12-15 05:37:03.513925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.513957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.514228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.514270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.514554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.514587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.514858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.514890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.515181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.515216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 
00:36:49.948 [2024-12-15 05:37:03.515487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.515519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.515804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.515837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.516121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.516154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.516355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.516386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 00:36:49.948 [2024-12-15 05:37:03.516532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.516563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 
00:36:49.948 [2024-12-15 05:37:03.516743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.948 [2024-12-15 05:37:03.516776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.948 qpair failed and we were unable to recover it. 
00:36:49.949 [same connect() failed (errno = 111) and qpair recovery error for tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 repeated through 05:37:03.548587; duplicate log lines elided]
00:36:49.949 [2024-12-15 05:37:03.548863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.548896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.549189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.549223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.549495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.549528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.549823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.549855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.550132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.550167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-12-15 05:37:03.550382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.550415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.550670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.550703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.550837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.550869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.551145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.551179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.551458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.551489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-12-15 05:37:03.551775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.551807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.552086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.552120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.552411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.552443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.552690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.552723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.553006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.553041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-12-15 05:37:03.553237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.553268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.553540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.553572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.553857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.553889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.554170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.554204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.554489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.554521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-12-15 05:37:03.554802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.554834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.555043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.555077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.555350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.555382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.555655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.555687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.555887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.555919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 
00:36:49.949 [2024-12-15 05:37:03.556179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.556218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.556416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.556449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.556626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.556658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.556908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.949 [2024-12-15 05:37:03.556940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.949 qpair failed and we were unable to recover it. 00:36:49.949 [2024-12-15 05:37:03.557145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.557178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.557376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.557407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.557678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.557711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.558020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.558055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.558312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.558343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.558540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.558573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.558765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.558796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.559046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.559080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.559385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.559417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.559676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.559708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.560013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.560048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.560330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.560362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.560639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.560671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.560981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.561022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.561323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.561357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.561616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.561649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.561905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.561937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.562155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.562189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.562410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.562442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.562717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.562749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.562950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.562982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.563265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.563299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.563551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.563584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.563882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.563914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.564103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.564138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.564404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.564435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.564652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.564684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.564961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.565000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.565284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.565314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.565511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.565541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.565735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.565764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.566013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.566045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.566224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.566253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.566468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.566498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.566712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.566741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.566865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.566895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.567077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.567108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.567406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.567436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.567686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.567717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.567985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.568028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.568315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.568346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.568622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.568653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.568945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.568974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.569259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.569292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.569552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.569583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.569804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.569835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:49.950 [2024-12-15 05:37:03.570113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.570146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.570343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.570373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.570632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.570663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.570965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.571008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 00:36:49.950 [2024-12-15 05:37:03.571235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:49.950 [2024-12-15 05:37:03.571268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:49.950 qpair failed and we were unable to recover it. 
00:36:50.243 [2024-12-15 05:37:03.601575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.601609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.601867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.601901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.602091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.602124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.602332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.602363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.602578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.602610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 
00:36:50.243 [2024-12-15 05:37:03.602896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.602927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.603162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.603196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.603491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.603523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.603765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.603797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.604086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.604121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 
00:36:50.243 [2024-12-15 05:37:03.604397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.604429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.604695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.604728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.605029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.605063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.605254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.605286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 00:36:50.243 [2024-12-15 05:37:03.605420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.243 [2024-12-15 05:37:03.605453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.243 qpair failed and we were unable to recover it. 
00:36:50.243 [2024-12-15 05:37:03.605718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.605752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.606038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.606074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.606351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.606383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.606669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.606701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.606915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.606946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 
00:36:50.244 [2024-12-15 05:37:03.607155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.607188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.607437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.607469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.607670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.607703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.607953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.607985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.608212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.608244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 
00:36:50.244 [2024-12-15 05:37:03.608451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.608490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.608788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.608820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.609082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.609116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.609419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.609452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.609666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.609699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 
00:36:50.244 [2024-12-15 05:37:03.609978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.610032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.610318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.610350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.610565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.610597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.610827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.610859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.611110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.611144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 
00:36:50.244 [2024-12-15 05:37:03.611446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.611479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.611759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.611791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.612021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.612055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.612250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.612282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.612564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.612596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 
00:36:50.244 [2024-12-15 05:37:03.612788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.612821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.613041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.613075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.613278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.613311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.613593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.613625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.613875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.613907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 
00:36:50.244 [2024-12-15 05:37:03.614110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.614144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.614275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.614307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.614508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.614541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.614846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.614878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.615014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.615049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 
00:36:50.244 [2024-12-15 05:37:03.615197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.615229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.244 [2024-12-15 05:37:03.615484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.244 [2024-12-15 05:37:03.615516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.244 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.615722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.615761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.615890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.615922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.616194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.616228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 
00:36:50.245 [2024-12-15 05:37:03.616529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.616561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.616826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.616858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.617069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.617103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.617381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.617414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.617688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.617720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 
00:36:50.245 [2024-12-15 05:37:03.617935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.617966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.618300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.618335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.618541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.618573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.618785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.618817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.619014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.619049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 
00:36:50.245 [2024-12-15 05:37:03.619346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.619378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.619584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.619617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.619801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.619832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.620023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.620058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.620333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.620365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 
00:36:50.245 [2024-12-15 05:37:03.620559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.620591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.620776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.620809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.621017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.621052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.621277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.621310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 00:36:50.245 [2024-12-15 05:37:03.621585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.245 [2024-12-15 05:37:03.621617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.245 qpair failed and we were unable to recover it. 
00:36:50.245 [2024-12-15 05:37:03.621814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.245 [2024-12-15 05:37:03.621846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.245 qpair failed and we were unable to recover it.
[The three log lines above repeat for every subsequent connection attempt, over 100 entries spanning 05:37:03.621 through 05:37:03.653. Every attempt to 10.0.0.2 port 4420 failed with errno = 111 (ECONNREFUSED) on tqpair=0x213c6a0, and in each case the qpair could not be recovered. Repeated entries elided.]
00:36:50.248 [2024-12-15 05:37:03.653352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.248 [2024-12-15 05:37:03.653385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.248 qpair failed and we were unable to recover it. 00:36:50.248 [2024-12-15 05:37:03.653580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.248 [2024-12-15 05:37:03.653612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.248 qpair failed and we were unable to recover it. 00:36:50.248 [2024-12-15 05:37:03.653886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.248 [2024-12-15 05:37:03.653919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.248 qpair failed and we were unable to recover it. 00:36:50.248 [2024-12-15 05:37:03.654166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.248 [2024-12-15 05:37:03.654200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.248 qpair failed and we were unable to recover it. 00:36:50.248 [2024-12-15 05:37:03.654501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.248 [2024-12-15 05:37:03.654533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.248 qpair failed and we were unable to recover it. 
00:36:50.248 [2024-12-15 05:37:03.654749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.654781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.655055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.655089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.655216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.655247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.655500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.655532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.655745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.655778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 
00:36:50.249 [2024-12-15 05:37:03.656046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.656080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.656353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.656386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.656598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.656631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.656930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.656961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.657261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.657295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 
00:36:50.249 [2024-12-15 05:37:03.657598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.657630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.657841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.657873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.658064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.658099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.658373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.658405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.658604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.658636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 
00:36:50.249 [2024-12-15 05:37:03.658887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.658919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.659143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.659177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.659452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.659484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.659764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.659796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.660083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.660117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 
00:36:50.249 [2024-12-15 05:37:03.660250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.660288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.660502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.660534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.660656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.660688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.660959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.660991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.661263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.661295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 
00:36:50.249 [2024-12-15 05:37:03.661499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.661532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.661790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.661822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.662071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.662105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.662406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.662438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.662632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.662665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 
00:36:50.249 [2024-12-15 05:37:03.662943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.662975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.663192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.663226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.663503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.663536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.663812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.663844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.664093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.664128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 
00:36:50.249 [2024-12-15 05:37:03.664428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.664461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.664749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.664781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.665057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.665091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.249 [2024-12-15 05:37:03.665385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.249 [2024-12-15 05:37:03.665416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.249 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.665709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.665742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.666004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.666038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.666257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.666289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.666508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.666540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.666776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.666809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.667036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.667070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.667271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.667304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.667584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.667616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.667862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.667900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.668103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.668138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.668411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.668444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.668648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.668679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.668825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.668857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.669152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.669186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.669390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.669421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.669691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.669724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.670013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.670048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.670322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.670353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.670536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.670567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.670845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.670878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.671149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.671182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.671472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.671505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.671637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.671670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.671875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.671907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.672106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.672141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.672333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.672365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.672615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.672648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.672897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.672929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.673240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.673274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.673553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.673585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.673838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.673870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.674145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.674178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.674456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.674487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.674708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.674740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.675015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.675048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.675357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.675395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 
00:36:50.250 [2024-12-15 05:37:03.675580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.675614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.675910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.250 [2024-12-15 05:37:03.675942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.250 qpair failed and we were unable to recover it. 00:36:50.250 [2024-12-15 05:37:03.676251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.676285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.676501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.676534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.676808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.676840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 
00:36:50.251 [2024-12-15 05:37:03.677047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.677082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.677260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.677292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.677565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.677598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.677891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.677923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.678219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.678253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 
00:36:50.251 [2024-12-15 05:37:03.678522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.678555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.678739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.678771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.679046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.679080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.679311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.679344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.679627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.679660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 
00:36:50.251 [2024-12-15 05:37:03.679855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.679886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.680069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.680103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.680386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.680418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.680682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.680714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.681016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.681049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 
00:36:50.251 [2024-12-15 05:37:03.681333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.681366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.681567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.681600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.681840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.681872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.682134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.682169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.682422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.682455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 
00:36:50.251 [2024-12-15 05:37:03.682661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.682693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.682967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.683008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.683161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.683195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.683389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.683421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.683732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.683764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 
00:36:50.251 [2024-12-15 05:37:03.683989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.684038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.684266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.684303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.684603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.684636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.684899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.684932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.685153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.685187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 
00:36:50.251 [2024-12-15 05:37:03.685455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.685489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.685771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.685803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.686051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.686085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.686268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.251 [2024-12-15 05:37:03.686300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.251 qpair failed and we were unable to recover it. 00:36:50.251 [2024-12-15 05:37:03.686592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.686625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 
00:36:50.252 [2024-12-15 05:37:03.686833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.686865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.687058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.687092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.687305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.687338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.687615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.687648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.687958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.687990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 
00:36:50.252 [2024-12-15 05:37:03.688208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.688241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.688382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.688413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.688592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.688624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.688902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.688935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.689205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.689239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 
00:36:50.252 [2024-12-15 05:37:03.689450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.689483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.689678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.689712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.689963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.690004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.690219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.690252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.690438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.690471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 
00:36:50.252 [2024-12-15 05:37:03.690748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.690779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.691051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.691085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.691225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.691258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.691461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.691493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.691769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.691801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 
00:36:50.252 [2024-12-15 05:37:03.691929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.691962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.692276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.692311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.692514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.692548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.692817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.692850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.693137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.693172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 
00:36:50.252 [2024-12-15 05:37:03.693426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.693459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.693764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.693795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.694058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.694098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.694368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.694402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.694598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.694629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 
00:36:50.252 [2024-12-15 05:37:03.694902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.694935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.252 [2024-12-15 05:37:03.695224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.252 [2024-12-15 05:37:03.695259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.252 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.695532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.695564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.695851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.695883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.696167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.696202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 
00:36:50.253 [2024-12-15 05:37:03.696463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.696495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.696790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.696823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.697099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.697133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.697266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.697298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.697572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.697604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 
00:36:50.253 [2024-12-15 05:37:03.697878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.697912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.698112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.698146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.698399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.698432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.698685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.698718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.698902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.698934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 
00:36:50.253 [2024-12-15 05:37:03.699144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.699178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.699434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.699467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.699591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.699623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.699898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.699931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 00:36:50.253 [2024-12-15 05:37:03.700080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.253 [2024-12-15 05:37:03.700115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.253 qpair failed and we were unable to recover it. 
00:36:50.253 [2024-12-15 05:37:03.700366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.253 [2024-12-15 05:37:03.700399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.253 qpair failed and we were unable to recover it.
[... the same three-line sequence (connect() failed, errno = 111 → sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 → qpair failed and we were unable to recover it) repeats ~114 more times between timestamps 05:37:03.700583 and 05:37:03.731631; only the microsecond timestamps differ ...]
00:36:50.256 [2024-12-15 05:37:03.731886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.731923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.732152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.732188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.732454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.732488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.732692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.732727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.732858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.732892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 
00:36:50.256 [2024-12-15 05:37:03.733095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.733131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.733342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.733377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.733574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.733611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.733832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.733868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.734144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.734179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 
00:36:50.256 [2024-12-15 05:37:03.734466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.734503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.734771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.734813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.735014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.735049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.735331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.735364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.735633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.735669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 
00:36:50.256 [2024-12-15 05:37:03.735858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.735894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.736161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.736198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.736404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.256 [2024-12-15 05:37:03.736440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.256 qpair failed and we were unable to recover it. 00:36:50.256 [2024-12-15 05:37:03.736637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.736671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.736962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.737025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 
00:36:50.257 [2024-12-15 05:37:03.737283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.737318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.737595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.737631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.737830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.737865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.738016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.738053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.738260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.738296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 
00:36:50.257 [2024-12-15 05:37:03.738558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.738594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.738879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.738925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.739233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.739269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.739527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.739561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.739862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.739900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 
00:36:50.257 [2024-12-15 05:37:03.740199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.740234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.740497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.740532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.740830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.740865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.741071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.741105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.741384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.741417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 
00:36:50.257 [2024-12-15 05:37:03.741719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.741752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.741956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.742000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.742277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.742311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.742604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.742637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.742909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.742943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 
00:36:50.257 [2024-12-15 05:37:03.743230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.743265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.743512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.743546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.743801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.743835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.744062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.744097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.744280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.744313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 
00:36:50.257 [2024-12-15 05:37:03.744591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.744626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.744830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.744864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.745124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.745159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.745354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.745388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.745656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.745689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 
00:36:50.257 [2024-12-15 05:37:03.745871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.745905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.257 qpair failed and we were unable to recover it. 00:36:50.257 [2024-12-15 05:37:03.746159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.257 [2024-12-15 05:37:03.746199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.746481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.746518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.746717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.746750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.746949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.746981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 
00:36:50.258 [2024-12-15 05:37:03.747269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.747303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.747557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.747590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.747869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.747902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.748096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.748131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.748387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.748420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 
00:36:50.258 [2024-12-15 05:37:03.748643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.748677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.748978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.749023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.749165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.749197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.749486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.749520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.749817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.749849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 
00:36:50.258 [2024-12-15 05:37:03.750050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.750085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.750341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.750374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.750580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.750612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.750813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.750845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.751143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.751178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 
00:36:50.258 [2024-12-15 05:37:03.751380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.751413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.751694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.751727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.751909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.751943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.752227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.752261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 00:36:50.258 [2024-12-15 05:37:03.752523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.258 [2024-12-15 05:37:03.752555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.258 qpair failed and we were unable to recover it. 
00:36:50.258 [2024-12-15 05:37:03.752747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.258 [2024-12-15 05:37:03.752781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.258 qpair failed and we were unable to recover it.
00:36:50.261 [2024-12-15 05:37:03.783925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.783958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.784239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.784274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.784557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.784590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.784874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.784906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.785125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.785160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 
00:36:50.261 [2024-12-15 05:37:03.785432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.785466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.785751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.785783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.786084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.786119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.786381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.786415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 00:36:50.261 [2024-12-15 05:37:03.786616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.261 [2024-12-15 05:37:03.786648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.261 qpair failed and we were unable to recover it. 
00:36:50.261 [2024-12-15 05:37:03.786915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.786949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.787142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.787182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.787437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.787469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.787722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.787755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.787958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.788002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-12-15 05:37:03.788289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.788323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.788524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.788557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.788760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.788792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.789069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.789103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.789407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.789440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-12-15 05:37:03.789718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.789751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.790032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.790065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.790348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.790381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.790579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.790612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.790864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.790897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-12-15 05:37:03.791138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.791172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.791397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.791431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.791626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.791658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.791858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.791891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.792147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.792182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-12-15 05:37:03.792484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.792516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.792780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.792812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.793112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.793147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.793440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.793472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.793659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.793692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-12-15 05:37:03.793893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.793926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.794205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.794238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.794461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.794494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.794749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.794788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 00:36:50.262 [2024-12-15 05:37:03.794969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.262 [2024-12-15 05:37:03.795011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.262 qpair failed and we were unable to recover it. 
00:36:50.262 [2024-12-15 05:37:03.795291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.795324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.795522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.795556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.795785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.795818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.796006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.796040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.796294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.796326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-12-15 05:37:03.796580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.796613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.796884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.796918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.797197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.797233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.797493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.797525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.797717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.797749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-12-15 05:37:03.797953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.797987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.798179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.798212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.798498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.798532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.798788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.798821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.799084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.799118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-12-15 05:37:03.799417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.799449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.799716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.799749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.800045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.800079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.800262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.800295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.800564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.800597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-12-15 05:37:03.800874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.800908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.801062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.801097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.801376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.801409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.801704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.801738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.802031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.802065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-12-15 05:37:03.802332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.802366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.802633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.802667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.802967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.803009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.803215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.803249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.803374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.803407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-12-15 05:37:03.803686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.803719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.803971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.804014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.804240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.804273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.804541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.804573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 00:36:50.263 [2024-12-15 05:37:03.804866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.804899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it. 
00:36:50.263 [2024-12-15 05:37:03.805173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.263 [2024-12-15 05:37:03.805207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.263 qpair failed and we were unable to recover it.
[editor's note: the preceding error pair (connect() failed, errno = 111 / sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420, followed by "qpair failed and we were unable to recover it.") repeats verbatim approximately 115 more times between 05:37:03.805471 and 05:37:03.836595; only the timestamps differ. Repeats elided.]
00:36:50.267 [2024-12-15 05:37:03.836817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.836849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.837100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.837135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.837397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.837430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.837723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.837756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.837962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.838002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-12-15 05:37:03.838339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.838372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.838647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.838679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.838934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.838966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.839277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.839310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.839602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.839635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-12-15 05:37:03.839826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.839858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.840050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.840090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.840288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.840321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.840600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.840631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.840907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.840939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-12-15 05:37:03.841176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.841210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.841409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.841441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.841672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.841704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.841926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.841958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.842226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.842260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-12-15 05:37:03.842555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.842588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.842858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.842890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.843038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.843073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.843354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.843387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 00:36:50.267 [2024-12-15 05:37:03.843706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.267 [2024-12-15 05:37:03.843738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.267 qpair failed and we were unable to recover it. 
00:36:50.267 [2024-12-15 05:37:03.843961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.844003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.844295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.844327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.844611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.844643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.844895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.844928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.845227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.845261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-12-15 05:37:03.845480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.845512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.845724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.845756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.846030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.846064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.846343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.846376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.846597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.846630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-12-15 05:37:03.846906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.846938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.847254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.847289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.847570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.847604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.847884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.847922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.848197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.848232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-12-15 05:37:03.848514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.848547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.848854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.848885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.849143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.849177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.849478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.849510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.849779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.849811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-12-15 05:37:03.850111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.850145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.850413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.850446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.850642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.850674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.850876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.850908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.851110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.851145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-12-15 05:37:03.851417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.851449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.851731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.851764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.851969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.852013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.852270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.852303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.852580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.852612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-12-15 05:37:03.852885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.852917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.853138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.853173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.853356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.853388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.853660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.853692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.853965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.854005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 
00:36:50.268 [2024-12-15 05:37:03.854291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.854323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.854533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.854565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.268 [2024-12-15 05:37:03.854817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.268 [2024-12-15 05:37:03.854850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.268 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.855108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.855141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.855342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.855375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-12-15 05:37:03.855644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.855687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.855936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.855969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.856300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.856335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.856581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.856613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.856926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.856958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-12-15 05:37:03.857225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.857260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.857512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.857544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.857813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.857845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.858043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.858078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.858278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.858310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-12-15 05:37:03.858591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.858624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.858873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.858905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.859103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.859138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.859334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.859366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.859624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.859657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-12-15 05:37:03.859954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.859986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.860280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.860314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.860586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.860619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.860843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.860875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.861155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.861189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-12-15 05:37:03.861382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.861415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.861563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.861594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.861894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.861926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.862122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.862156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.862349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.862381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-12-15 05:37:03.862568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.862601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.862850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.862882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.863077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.863111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.863393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.863426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.863607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.863639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 
00:36:50.269 [2024-12-15 05:37:03.863913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.863945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.864225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.864259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.864546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.269 [2024-12-15 05:37:03.864579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.269 qpair failed and we were unable to recover it. 00:36:50.269 [2024-12-15 05:37:03.864860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.864891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.865174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.865208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 
00:36:50.270 [2024-12-15 05:37:03.865462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.865495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.865769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.865801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.866012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.866046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.866247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.866280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.866498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.866530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 
00:36:50.270 [2024-12-15 05:37:03.866780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.866812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.867030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.867065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.867325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.867357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.867662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.867694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.867824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.867856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 
00:36:50.270 [2024-12-15 05:37:03.868006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.868040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.868228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.868260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.868485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.868517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.868710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.868743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.869044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.869079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 
00:36:50.270 [2024-12-15 05:37:03.869360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.869391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.869670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.869702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.869988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.870033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.870300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.870332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.870519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.870551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 
00:36:50.270 [2024-12-15 05:37:03.870832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.870865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.871133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.871169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.871460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.871492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.871761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.871794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.872044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.872078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 
00:36:50.270 [2024-12-15 05:37:03.872284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.872316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.872540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.872573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.872775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.872808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.872961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.873002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.873281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.873314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 
00:36:50.270 [2024-12-15 05:37:03.873584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.873616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.873930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.873962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.874245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.270 [2024-12-15 05:37:03.874280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.270 qpair failed and we were unable to recover it. 00:36:50.270 [2024-12-15 05:37:03.874556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.874594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.874778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.874810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 
00:36:50.271 [2024-12-15 05:37:03.875079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.875113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.875394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.875426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.875705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.875737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.876027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.876061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.876260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.876291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 
00:36:50.271 [2024-12-15 05:37:03.876544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.876576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.876702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.876735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.877013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.877046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.877230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.877263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.877479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.877512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 
00:36:50.271 [2024-12-15 05:37:03.877787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.877819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.878085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.878119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.878415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.878448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.878733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.878766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.879043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.879078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 
00:36:50.271 [2024-12-15 05:37:03.879227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.879260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.879479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.879511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.879762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.879794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.880058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.880094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.880345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.880377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 
00:36:50.271 [2024-12-15 05:37:03.880556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.880588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.880766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.880799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.881084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.881118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.881381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.881415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.881619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.881651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 
00:36:50.271 [2024-12-15 05:37:03.881831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.881869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.882141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.882175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.882445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.882477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.882701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.882733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.882914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.882947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 
00:36:50.271 [2024-12-15 05:37:03.883139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.883173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.883454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.883487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.883757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.883790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.883979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.271 [2024-12-15 05:37:03.884023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.271 qpair failed and we were unable to recover it. 00:36:50.271 [2024-12-15 05:37:03.884226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.272 [2024-12-15 05:37:03.884258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.272 qpair failed and we were unable to recover it. 
00:36:50.272 [2024-12-15 05:37:03.884441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.272 [2024-12-15 05:37:03.884473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.272 qpair failed and we were unable to recover it.
00:36:50.570 [2024-12-15 05:37:03.915003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.570 [2024-12-15 05:37:03.915037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.570 qpair failed and we were unable to recover it. 00:36:50.570 [2024-12-15 05:37:03.915254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.570 [2024-12-15 05:37:03.915287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.570 qpair failed and we were unable to recover it. 00:36:50.570 [2024-12-15 05:37:03.915501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.570 [2024-12-15 05:37:03.915533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.570 qpair failed and we were unable to recover it. 00:36:50.570 [2024-12-15 05:37:03.915726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.915762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.916047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.916085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.916375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.916411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.916703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.916739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.917016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.917053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.917336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.917370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.917639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.917672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.917911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.917943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.918262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.918296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.918413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.918446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.918705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.918738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.919011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.919046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.919231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.919264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.919487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.919520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.919715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.919748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.919934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.919967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.920205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.920240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.920391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.920425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.920718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.920751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.920951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.920984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.921302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.921335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.921470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.921503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.921694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.921727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.921981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.922041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.922249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.922282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.922562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.922595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.922881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.922914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.923093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.923128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.923344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.923377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.923653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.923686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.923866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.923899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.924174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.924209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.924482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.924515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.924791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.924824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.924956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.924990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.925129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.925162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 00:36:50.571 [2024-12-15 05:37:03.925293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.571 [2024-12-15 05:37:03.925327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.571 qpair failed and we were unable to recover it. 
00:36:50.571 [2024-12-15 05:37:03.925614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.925649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.925883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.925916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.926111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.926145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.926280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.926313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.926519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.926552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 
00:36:50.572 [2024-12-15 05:37:03.926753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.926786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.926979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.927026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.927301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.927334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.927637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.927670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.927868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.927902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 
00:36:50.572 [2024-12-15 05:37:03.928111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.928147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.928410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.928443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.928698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.928732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.928982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.929028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.929238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.929271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 
00:36:50.572 [2024-12-15 05:37:03.929523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.929556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.929748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.929780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.930063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.930098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.930398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.930431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.930700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.930733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 
00:36:50.572 [2024-12-15 05:37:03.931032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.931066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.931336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.931370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.931654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.931687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.931970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.932029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.932291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.932324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 
00:36:50.572 [2024-12-15 05:37:03.932524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.932563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.932763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.932797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.933068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.933103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.933405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.933438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.933732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.933766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 
00:36:50.572 [2024-12-15 05:37:03.934020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.934054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.934253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.934286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.934481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.934514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.934727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.934760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 00:36:50.572 [2024-12-15 05:37:03.934959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.572 [2024-12-15 05:37:03.935000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.572 qpair failed and we were unable to recover it. 
00:36:50.572 [2024-12-15 05:37:03.935273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.573 [2024-12-15 05:37:03.935307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.573 qpair failed and we were unable to recover it. 
00:36:50.576 [... same error pair repeated for each subsequent connection attempt (connect() failed, errno = 111; sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) through 2024-12-15 05:37:03.966756 ...]
00:36:50.576 [2024-12-15 05:37:03.966957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.967009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.967266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.967300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.967579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.967620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.967856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.967890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.968084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.968120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 
00:36:50.576 [2024-12-15 05:37:03.968324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.968364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.968569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.968605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.968814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.968849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.969070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.969105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.969261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.969298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 
00:36:50.576 [2024-12-15 05:37:03.969555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.969591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.969840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.969880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.970160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.970197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.970435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.970469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.970748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.970785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 
00:36:50.576 [2024-12-15 05:37:03.971073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.971109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.971387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.971427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.971718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.971752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.971879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.971911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.972109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.972144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 
00:36:50.576 [2024-12-15 05:37:03.972413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.972449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.972646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.972680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.972867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.576 [2024-12-15 05:37:03.972907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.576 qpair failed and we were unable to recover it. 00:36:50.576 [2024-12-15 05:37:03.973138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.973178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.973371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.973412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 
00:36:50.577 [2024-12-15 05:37:03.973592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.973625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.973745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.973778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.973900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.973933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.974204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.974240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.974445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.974479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 
00:36:50.577 [2024-12-15 05:37:03.974701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.974734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.974921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.974957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.975087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.975121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.975400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.975434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.975686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.975720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 
00:36:50.577 [2024-12-15 05:37:03.975935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.975968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.976272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.976309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.976569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.976606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.976900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.976934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.977225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.977260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 
00:36:50.577 [2024-12-15 05:37:03.977466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.977502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.977660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.977695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.977893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.977929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.978165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.978200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.978478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.978514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 
00:36:50.577 [2024-12-15 05:37:03.978701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.978735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.979008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.979043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.979239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.979275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.979478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.979514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.979717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.979759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 
00:36:50.577 [2024-12-15 05:37:03.979952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.979986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.980201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.980241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.980381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.980414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.980718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.980752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.981042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.981081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 
00:36:50.577 [2024-12-15 05:37:03.981380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.981416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.981671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.981705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.577 [2024-12-15 05:37:03.981899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.577 [2024-12-15 05:37:03.981932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.577 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.982076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.982110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.982387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.982421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 
00:36:50.578 [2024-12-15 05:37:03.982615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.982648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.982829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.982862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.983054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.983089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.983362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.983396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.983613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.983646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 
00:36:50.578 [2024-12-15 05:37:03.983761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.983795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.984003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.984037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.984289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.984322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.984539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.984572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.984764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.984797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 
00:36:50.578 [2024-12-15 05:37:03.985099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.985133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.985419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.985452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.985756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.985789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.986050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.986084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 00:36:50.578 [2024-12-15 05:37:03.986327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.986360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 
00:36:50.578 [2024-12-15 05:37:03.986660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.578 [2024-12-15 05:37:03.986694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.578 qpair failed and we were unable to recover it. 
00:36:50.581 [last message repeated 114 times through 2024-12-15 05:37:04.018396 — connect() to 10.0.0.2:4420 refused (errno 111, ECONNREFUSED) on every attempt for tqpair=0x213c6a0]
00:36:50.581 [2024-12-15 05:37:04.018547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.581 [2024-12-15 05:37:04.018588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.581 qpair failed and we were unable to recover it. 00:36:50.581 [2024-12-15 05:37:04.018841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.581 [2024-12-15 05:37:04.018880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.581 qpair failed and we were unable to recover it. 00:36:50.581 [2024-12-15 05:37:04.019139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.581 [2024-12-15 05:37:04.019173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.581 qpair failed and we were unable to recover it. 00:36:50.581 [2024-12-15 05:37:04.019452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.581 [2024-12-15 05:37:04.019484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.581 qpair failed and we were unable to recover it. 00:36:50.581 [2024-12-15 05:37:04.019760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.581 [2024-12-15 05:37:04.019798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.581 qpair failed and we were unable to recover it. 
00:36:50.581 [2024-12-15 05:37:04.020059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.581 [2024-12-15 05:37:04.020097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.581 qpair failed and we were unable to recover it. 00:36:50.581 [2024-12-15 05:37:04.020330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.020366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.020649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.020685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.020962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.021026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.021316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.021353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 
00:36:50.582 [2024-12-15 05:37:04.021585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.021621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.021849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.021882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.022080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.022119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.022377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.022412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.022645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.022678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 
00:36:50.582 [2024-12-15 05:37:04.022927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.022960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.023263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.023302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.023488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.023523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.023778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.023812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.023944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.023984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 
00:36:50.582 [2024-12-15 05:37:04.024256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.024293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.024555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.024589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.024882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.024917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.025189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.025224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.025509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.025550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 
00:36:50.582 [2024-12-15 05:37:04.025817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.025852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.026131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.026165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.026381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.026415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.026599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.026633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.026826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.026859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 
00:36:50.582 [2024-12-15 05:37:04.027130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.027165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.027452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.027486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.027741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.027774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.027975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.028022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.028295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.028329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 
00:36:50.582 [2024-12-15 05:37:04.028527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.028559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.028818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.028851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.029108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.029144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.029446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.029480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.029740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.029774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 
00:36:50.582 [2024-12-15 05:37:04.030066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.030100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.030371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.030405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.030716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.582 [2024-12-15 05:37:04.030750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.582 qpair failed and we were unable to recover it. 00:36:50.582 [2024-12-15 05:37:04.031003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.031037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.031317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.031350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 
00:36:50.583 [2024-12-15 05:37:04.031554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.031587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.031860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.031893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.032106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.032141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.032417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.032450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.032710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.032744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 
00:36:50.583 [2024-12-15 05:37:04.033016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.033050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.033252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.033291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.033572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.033605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.033814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.033847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.034046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.034081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 
00:36:50.583 [2024-12-15 05:37:04.034358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.034392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.034613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.034647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.034926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.034960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.035263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.035298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.035558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.035592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 
00:36:50.583 [2024-12-15 05:37:04.035794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.035828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.036103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.036138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.036396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.036429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.036663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.036696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 00:36:50.583 [2024-12-15 05:37:04.036921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.583 [2024-12-15 05:37:04.036953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.583 qpair failed and we were unable to recover it. 
00:36:50.583 [2024-12-15 05:37:04.037209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.583 [2024-12-15 05:37:04.037289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:50.583 qpair failed and we were unable to recover it.
00:36:50.584 [above message repeated 29 more times for tqpair=0x7fa22c000b90, timestamps 05:37:04.037549 through 05:37:04.045617]
00:36:50.584 [2024-12-15 05:37:04.045810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.045843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.045957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.045989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.046191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.046225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.046435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.046469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.046719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.046752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 
00:36:50.584 [2024-12-15 05:37:04.047014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.047049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.047321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.047354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.047642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.047675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.047949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.047981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.048243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.048278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 
00:36:50.584 [2024-12-15 05:37:04.048627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.048707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.048941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.048979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.049285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.049319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.049461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.049494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.049745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.049778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 
00:36:50.584 [2024-12-15 05:37:04.050080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.050114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.050363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.050397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.050706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.050739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.584 qpair failed and we were unable to recover it. 00:36:50.584 [2024-12-15 05:37:04.050952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.584 [2024-12-15 05:37:04.050984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.051258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.051292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 
00:36:50.585 [2024-12-15 05:37:04.051568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.051601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.051793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.051825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.052084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.052119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.052397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.052430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.052635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.052668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 
00:36:50.585 [2024-12-15 05:37:04.052864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.052896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.053101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.053136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.053411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.053443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.053709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.053741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.053931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.053965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 
00:36:50.585 [2024-12-15 05:37:04.054201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.054234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.054417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.054450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.054718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.054750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.055012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.055047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.055244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.055277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 
00:36:50.585 [2024-12-15 05:37:04.055545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.055577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.055868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.055901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.056174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.056215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.056492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.056524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.056805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.056838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 
00:36:50.585 [2024-12-15 05:37:04.057121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.057155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.057354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.057386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.057604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.057637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.057855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.057888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.058141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.058175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 
00:36:50.585 [2024-12-15 05:37:04.058377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.058409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.058588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.058622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.058807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.058839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.059109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.059144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.585 [2024-12-15 05:37:04.059338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.059371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 
00:36:50.585 [2024-12-15 05:37:04.059552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.585 [2024-12-15 05:37:04.059584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.585 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.059862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.059895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.060116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.060151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.060409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.060442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.060746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.060778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 
00:36:50.586 [2024-12-15 05:37:04.061081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.061116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.061379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.061412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.061633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.061666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.061967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.062007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.062284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.062316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 
00:36:50.586 [2024-12-15 05:37:04.062597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.062631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.062840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.062872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.063151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.063186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.063318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.063351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.063608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.063647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 
00:36:50.586 [2024-12-15 05:37:04.063850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.063883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.064170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.064205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.064484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.064516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.064697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.064730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.065010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.065045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 
00:36:50.586 [2024-12-15 05:37:04.065311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.065344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.065627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.065659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.065944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.065977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.066255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.066288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.066561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.066593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 
00:36:50.586 [2024-12-15 05:37:04.066889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.066923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.067192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.067225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.067438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.067470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.067725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.067759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.068060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.068094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 
00:36:50.586 [2024-12-15 05:37:04.068315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.068347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.068557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.068590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.068771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.068804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.068984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.069045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.069246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.069280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 
00:36:50.586 [2024-12-15 05:37:04.069577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.069610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.069820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.069853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.070058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.070093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.070292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.070324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.586 qpair failed and we were unable to recover it. 00:36:50.586 [2024-12-15 05:37:04.070600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.586 [2024-12-15 05:37:04.070633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 
00:36:50.587 [2024-12-15 05:37:04.070902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.070935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.071207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.071247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.071532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.071565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.071855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.071888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.072169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.072203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 
00:36:50.587 [2024-12-15 05:37:04.072432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.072466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.072740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.072772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.073006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.073040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.073233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.073266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.073450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.073482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 
00:36:50.587 [2024-12-15 05:37:04.073732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.073765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.073970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.074012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.074197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.074230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.074493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.074525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.074801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.074835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 
00:36:50.587 [2024-12-15 05:37:04.075054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.075088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.075275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.075307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.075579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.075612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.075883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.075916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.076211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.076246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 
00:36:50.587 [2024-12-15 05:37:04.076518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.076551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.076752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.076785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.077043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.077078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.077373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.077407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.077671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.077704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 
00:36:50.587 [2024-12-15 05:37:04.077984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.078029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.078311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.078344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.078560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.078594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.078872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.078905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.079172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.079206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 
00:36:50.587 [2024-12-15 05:37:04.079486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.079518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.079641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.079675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.079925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.079957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.080246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.080280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.587 qpair failed and we were unable to recover it. 00:36:50.587 [2024-12-15 05:37:04.080557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.587 [2024-12-15 05:37:04.080590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.080794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.080827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.081095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.081130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.081338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.081372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.081646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.081678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.081967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.082009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.082278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.082312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.082427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.082459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.082734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.082768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.083046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.083081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.083301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.083333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.083615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.083648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.083925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.083958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.084243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.084276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.084494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.084526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.084720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.084753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.085030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.085064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.085339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.085373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.085640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.085673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.085929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.085961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.086196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.086230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.086450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.086484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.086745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.086778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.087071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.087106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.087375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.087409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.087716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.087748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.087951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.087983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.088136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.088170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.088457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.088489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.088767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.088799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.088917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.088949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.089156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.089191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.089443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.089477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.089725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.089758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.089942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.089976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.090169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.090214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 
00:36:50.588 [2024-12-15 05:37:04.090488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.090521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.090783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.090816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.588 [2024-12-15 05:37:04.091011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.588 [2024-12-15 05:37:04.091046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.588 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.091228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.091260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.091540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.091572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 
00:36:50.589 [2024-12-15 05:37:04.091844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.091878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.092168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.092203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.092477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.092509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.092727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.092761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.092899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.092931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 
00:36:50.589 [2024-12-15 05:37:04.093240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.093274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.093554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.093587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.093866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.093899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.094099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.094135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.094392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.094425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 
00:36:50.589 [2024-12-15 05:37:04.094634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.094667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.094926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.094959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.095245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.095280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.095564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.095596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 00:36:50.589 [2024-12-15 05:37:04.095848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.589 [2024-12-15 05:37:04.095880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.589 qpair failed and we were unable to recover it. 
00:36:50.589 [2024-12-15 05:37:04.096091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.096126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.096269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.096301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.096590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.096622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.096841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.096874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.097129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.097162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.097430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.097463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.097758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.097798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.098081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.098116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.098392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.098424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.098718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.098751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.099045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.099079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.099347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.099379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.099673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.099705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.099889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.099921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.589 [2024-12-15 05:37:04.100202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.589 [2024-12-15 05:37:04.100237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.589 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.100435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.100469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.100770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.100802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.100988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.101046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.101325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.101358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.101626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.101659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.101954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.101987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.102288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.102322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.102586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.102618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.102917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.102950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.103229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.103264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.103515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.103548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.103768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.103801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.104078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.104112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.104305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.104338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.104517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.104549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.104850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.104882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.105113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.105147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.105399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.105431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.105737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.105771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.106036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.106071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.106265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.106298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.106510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.106544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.106725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.106757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.107012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.107046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.107328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.107362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.107617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.107649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.107917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.107950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.108214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.108247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.108523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.108555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.108749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.108782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.109046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.109080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.109371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.109404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.109679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.109713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.110006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.110040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.110227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.110260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.110531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.110564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.110765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.110797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.111047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.111081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.590 [2024-12-15 05:37:04.111359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.590 [2024-12-15 05:37:04.111393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.590 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.111591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.111623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.111897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.111929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.112227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.112262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.112528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.112560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.112814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.112847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.113148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.113183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.113454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.113488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.113770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.113804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.114083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.114117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.114399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.114432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.114718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.114750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.114956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.114989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.115250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.115284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.115485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.115519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.115794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.115826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.116105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.116138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.116424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.116457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.116684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.116718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.116968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.117011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.117215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.117248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.117517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.117556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.117825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.117858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.118043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.118077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.118348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.118382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.118684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.118717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.118978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.119020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.119211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.119245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.119513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.119545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.119764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.119797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.120069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.120104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.120334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.120366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.120616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.120649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.120910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.120943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.121186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.121222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.121478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.591 [2024-12-15 05:37:04.121511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.591 qpair failed and we were unable to recover it.
00:36:50.591 [2024-12-15 05:37:04.121790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.592 [2024-12-15 05:37:04.121823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.592 qpair failed and we were unable to recover it.
00:36:50.592 [2024-12-15 05:37:04.122017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.592 [2024-12-15 05:37:04.122051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.592 qpair failed and we were unable to recover it.
00:36:50.592 [2024-12-15 05:37:04.122305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.592 [2024-12-15 05:37:04.122338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.592 qpair failed and we were unable to recover it.
00:36:50.592 [2024-12-15 05:37:04.122539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.592 [2024-12-15 05:37:04.122573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.592 qpair failed and we were unable to recover it.
00:36:50.592 [2024-12-15 05:37:04.122851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.592 [2024-12-15 05:37:04.122883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.592 qpair failed and we were unable to recover it.
00:36:50.592 [2024-12-15 05:37:04.123083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.592 [2024-12-15 05:37:04.123118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.592 qpair failed and we were unable to recover it.
00:36:50.592 [2024-12-15 05:37:04.123298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.123331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.123607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.123639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.123928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.123961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.124260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.124295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.124561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.124593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-12-15 05:37:04.124819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.124851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.125032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.125074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.125269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.125301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.125521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.125553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.125696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.125728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-12-15 05:37:04.126017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.126051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.126182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.126214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.126462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.126496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.126610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.126641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.126926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.126959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-12-15 05:37:04.127267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.127303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.127506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.127539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.127821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.127853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.128131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.128166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.128363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.128396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-12-15 05:37:04.128599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.128632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.128828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.128861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.129136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.129170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.129450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.129482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.129688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.129722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-12-15 05:37:04.129961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.130003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.130283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.130315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.130507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.130540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.130756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.130788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.130970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.131011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 
00:36:50.592 [2024-12-15 05:37:04.131276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.131309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.131580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.131612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.131814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.592 [2024-12-15 05:37:04.131846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.592 qpair failed and we were unable to recover it. 00:36:50.592 [2024-12-15 05:37:04.132054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.132094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.132295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.132327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.132531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.132564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.132745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.132778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.132973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.133033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.133183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.133216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.133466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.133499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.133798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.133830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.134114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.134148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.134361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.134395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.134675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.134707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.134987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.135030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.135303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.135336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.135620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.135652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.135912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.135946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.136245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.136280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.136542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.136574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.136767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.136800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.137006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.137041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.137318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.137350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.137627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.137660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.137950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.137984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.138255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.138288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.138581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.138613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.138842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.138875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.139129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.139164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.139438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.139471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.139723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.139757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.140062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.140097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.140387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.140420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.140603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.140635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.140910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.140942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.141153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.141188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.141379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.141411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.141682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.141714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.141986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.142031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 00:36:50.593 [2024-12-15 05:37:04.142331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.593 [2024-12-15 05:37:04.142364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.593 qpair failed and we were unable to recover it. 
00:36:50.593 [2024-12-15 05:37:04.142618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.142651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.142933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.142966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.143272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.143305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.143517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.143550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.143833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.143866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 
00:36:50.594 [2024-12-15 05:37:04.144010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.144044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.144347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.144380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.144601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.144633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.144838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.144871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.145123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.145158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 
00:36:50.594 [2024-12-15 05:37:04.145433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.145466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.145668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.145700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.145835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.145868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.146071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.146105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 00:36:50.594 [2024-12-15 05:37:04.146378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.594 [2024-12-15 05:37:04.146410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.594 qpair failed and we were unable to recover it. 
00:36:50.594 [2024-12-15 05:37:04.146684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.594 [2024-12-15 05:37:04.146717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.594 qpair failed and we were unable to recover it.
[... same connect() failed, errno = 111 / sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 / qpair failed sequence repeated through 2024-12-15 05:37:04.178616 ...]
00:36:50.597 [2024-12-15 05:37:04.178814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.597 [2024-12-15 05:37:04.178847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.597 qpair failed and we were unable to recover it.
00:36:50.597 [2024-12-15 05:37:04.179104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.179138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.179335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.179369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.179644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.179676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.179825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.179857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.180057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.180092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 
00:36:50.597 [2024-12-15 05:37:04.180364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.180396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.180675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.180708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.180908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.180940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.181141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.181175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.181397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.181430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 
00:36:50.597 [2024-12-15 05:37:04.181706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.181738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.182008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.182044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.182244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.182277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.182545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.182577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.597 qpair failed and we were unable to recover it. 00:36:50.597 [2024-12-15 05:37:04.182756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.597 [2024-12-15 05:37:04.182789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.182990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.183036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.183288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.183321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.183542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.183575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.183829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.183861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.184125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.184160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.184289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.184322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.184521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.184554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.184828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.184860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.185115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.185156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.185411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.185443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.185694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.185727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.186009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.186044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.186346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.186378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.186634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.186667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.186877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.186910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.187175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.187209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.187506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.187539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.187790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.187823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.187953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.187984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.188212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.188246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.188444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.188477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.188680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.188713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.188915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.188947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.189174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.189209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.189460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.189492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.189770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.189802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.190013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.190047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.190239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.190271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.190470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.190502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.190689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.190723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.191007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.191041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.191293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.191326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.191539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.191572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.191850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.191883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.192168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.192203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 
00:36:50.598 [2024-12-15 05:37:04.192503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.192543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.192727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.598 [2024-12-15 05:37:04.192759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.598 qpair failed and we were unable to recover it. 00:36:50.598 [2024-12-15 05:37:04.193035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.193069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.193341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.193374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.193686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.193719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-12-15 05:37:04.193976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.194019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.194318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.194351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.194604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.194636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.194927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.194960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.195244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.195278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-12-15 05:37:04.195493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.195526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.195793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.195825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.196038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.196072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.196349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.196382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.196595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.196629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-12-15 05:37:04.196910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.196942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.197110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.197143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.197351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.197383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.197683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.197716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.198016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.198051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-12-15 05:37:04.198272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.198306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.198578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.198611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.198902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.198935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.199392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.199431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 00:36:50.599 [2024-12-15 05:37:04.199739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.599 [2024-12-15 05:37:04.199775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.599 qpair failed and we were unable to recover it. 
00:36:50.599 [2024-12-15 05:37:04.200057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.599 [2024-12-15 05:37:04.200094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.599 qpair failed and we were unable to recover it.
[... the identical two-line connect() failed (errno = 111) / sock connection error sequence, each followed by "qpair failed and we were unable to recover it.", repeats continuously for the same tqpair=0x213c6a0 (addr=10.0.0.2, port=4420) from 05:37:04.200057 through 05:37:04.231924 ...]
00:36:50.888 [2024-12-15 05:37:04.232073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.232107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.232311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.232344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.232544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.232577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.232836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.232870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.233126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.233160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 
00:36:50.888 [2024-12-15 05:37:04.233380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.233412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.233689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.233723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.234014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.234048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.234200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.234233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.234433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.234466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 
00:36:50.888 [2024-12-15 05:37:04.234770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.234803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.235079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.235115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.235324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.888 [2024-12-15 05:37:04.235357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.888 qpair failed and we were unable to recover it. 00:36:50.888 [2024-12-15 05:37:04.235554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.235587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.235782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.235815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 
00:36:50.889 [2024-12-15 05:37:04.236089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.236125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.236347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.236380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.236577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.236610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.236801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.236834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.237114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.237150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 
00:36:50.889 [2024-12-15 05:37:04.237425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.237458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.237767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.237799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.238057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.238091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.238227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.238267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.238409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.238442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 
00:36:50.889 [2024-12-15 05:37:04.238719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.238753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.238947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.238980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.239302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.239336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.239595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.239628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.239925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.239958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 
00:36:50.889 [2024-12-15 05:37:04.240170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.240210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.240489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.240522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.240833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.240867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.241139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.241178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.241378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.241410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 
00:36:50.889 [2024-12-15 05:37:04.241564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.241599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.241873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.241909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.242192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.242227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.242506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.242539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.242820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.242854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 
00:36:50.889 [2024-12-15 05:37:04.243050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.243085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.243334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.243368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.243665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.243698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.243965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.244015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.244141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.244175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 
00:36:50.889 [2024-12-15 05:37:04.244404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.244437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.244623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.244656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.889 [2024-12-15 05:37:04.244927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.889 [2024-12-15 05:37:04.244961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.889 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.245181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.245216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.245398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.245431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 
00:36:50.890 [2024-12-15 05:37:04.245708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.245747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.246017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.246053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.246338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.246373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.246641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.246674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.246969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.247013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 
00:36:50.890 [2024-12-15 05:37:04.247248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.247284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.247495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.247531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.247807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.247840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.248108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.248145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.248438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.248475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 
00:36:50.890 [2024-12-15 05:37:04.248670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.248703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.248900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.248938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.249229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.249274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.249481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.249516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.249802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.249836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 
00:36:50.890 [2024-12-15 05:37:04.250053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.250098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.250250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.250283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.250408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.250443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.250645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.250678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.250984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.251032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 
00:36:50.890 [2024-12-15 05:37:04.251157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.251200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.251476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.251511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.251710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.251744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.251881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.251914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.252161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.252198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 
00:36:50.890 [2024-12-15 05:37:04.252399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.252435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.252570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.252603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.252875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.252915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.253222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.253260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 00:36:50.890 [2024-12-15 05:37:04.253484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.890 [2024-12-15 05:37:04.253517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.890 qpair failed and we were unable to recover it. 
00:36:50.893 [ identical error pair repeated: posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111, followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420, each ending "qpair failed and we were unable to recover it.", from 2024-12-15 05:37:04.253652 through 05:37:04.284560 ]
00:36:50.894 [2024-12-15 05:37:04.284756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.284789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.285047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.285081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.285407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.285439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.285717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.285750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.285958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.286000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 
00:36:50.894 [2024-12-15 05:37:04.286303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.286336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.286518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.286551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.286851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.286883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.287136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.287170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.287435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.287468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 
00:36:50.894 [2024-12-15 05:37:04.287670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.287703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.287841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.287874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.288076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.288110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.288384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.288418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.288697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.288729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 
00:36:50.894 [2024-12-15 05:37:04.289038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.289073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.289331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.289365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.289644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.289677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.289936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.289969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.290296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.290331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 
00:36:50.894 [2024-12-15 05:37:04.290532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.290565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.290783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.290816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.290938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.290971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.291187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.291222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.291379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.291412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 
00:36:50.894 [2024-12-15 05:37:04.291685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.291718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.291949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.291981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.292291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.292325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.292574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.292608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.292860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.292893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 
00:36:50.894 [2024-12-15 05:37:04.293033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.293068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.293263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.293296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.293408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.293440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.293570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.293603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.894 [2024-12-15 05:37:04.293797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.293830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 
00:36:50.894 [2024-12-15 05:37:04.294153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.894 [2024-12-15 05:37:04.294187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.894 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.294441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.294474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.294608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.294642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.294918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.294950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.295162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.295196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 
00:36:50.895 [2024-12-15 05:37:04.295498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.295533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.295787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.295821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.296046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.296081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.296204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.296237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.296455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.296488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 
00:36:50.895 [2024-12-15 05:37:04.296693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.296726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.297029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.297064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.297269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.297302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.297578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.297611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.297756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.297789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 
00:36:50.895 [2024-12-15 05:37:04.298020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.298061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.298267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.298300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.298572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.298605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.298786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.298818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.299014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.299048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 
00:36:50.895 [2024-12-15 05:37:04.299330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.299363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.299585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.299618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.299812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.299845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.300099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.300134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.300411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.300443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 
00:36:50.895 [2024-12-15 05:37:04.300640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.300672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.300881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.300915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.301168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.301202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.301456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.301488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.301728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.301762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 
00:36:50.895 [2024-12-15 05:37:04.301947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.301980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.302285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.302323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.302464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.302495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.302692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.302725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.302946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.302979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 
00:36:50.895 [2024-12-15 05:37:04.303178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.303210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.303411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.303445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.303722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.303755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.895 [2024-12-15 05:37:04.303932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.895 [2024-12-15 05:37:04.303965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.895 qpair failed and we were unable to recover it. 00:36:50.896 [2024-12-15 05:37:04.304244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.896 [2024-12-15 05:37:04.304277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.896 qpair failed and we were unable to recover it. 
00:36:50.896 [2024-12-15 05:37:04.304544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:50.896 [2024-12-15 05:37:04.304577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:50.896 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats with only the timestamps changing, from 05:37:04.304798 through 05:37:04.335134 ...]
00:36:50.899 [2024-12-15 05:37:04.335264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.335298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.335501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.335533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.335792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.335824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.336018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.336053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.336333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.336365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-12-15 05:37:04.336622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.336655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.336915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.336947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.337252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.337286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.337429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.337463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.337661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.337700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-12-15 05:37:04.337978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.338021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.338215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.338247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.338547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.338580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.338760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.338792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.339090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.339123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-12-15 05:37:04.339244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.339277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.339458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.339490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.339698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.339731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.340020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.340054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.340254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.340288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-12-15 05:37:04.340509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.340540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.340729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.340761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.340942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.340975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.341252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.341286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.341394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.341426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-12-15 05:37:04.341570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.341603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.341839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.341875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.342075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.342109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.342362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.342395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.342574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.342607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 
00:36:50.899 [2024-12-15 05:37:04.342792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.899 [2024-12-15 05:37:04.342824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.899 qpair failed and we were unable to recover it. 00:36:50.899 [2024-12-15 05:37:04.343043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.343076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.343284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.343317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.343550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.343585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.343789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.343822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-12-15 05:37:04.344030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.344065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.344340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.344373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.344563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.344597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.344857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.344890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.345167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.345201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-12-15 05:37:04.345393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.345426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.345571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.345604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.345789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.345822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.346035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.346070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.346325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.346358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-12-15 05:37:04.346543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.346576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.346768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.346800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.347072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.347107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.347357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.347391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.347599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.347632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-12-15 05:37:04.347904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.347938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.348090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.348126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.348402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.348438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.348717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.348750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.348979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.349025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-12-15 05:37:04.349293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.349329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.349550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.349584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.349786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.349826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.350086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.350122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.350401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.350437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-12-15 05:37:04.350717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.350749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.351054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.351089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.351348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.351382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.351573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.351606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.351790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.351824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 
00:36:50.900 [2024-12-15 05:37:04.352024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.352057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.352339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.352373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.900 [2024-12-15 05:37:04.352556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.900 [2024-12-15 05:37:04.352591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.900 qpair failed and we were unable to recover it. 00:36:50.901 [2024-12-15 05:37:04.352716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.352749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-12-15 05:37:04.352932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.352965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 
00:36:50.901 [2024-12-15 05:37:04.353187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.353221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-12-15 05:37:04.353498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.353530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-12-15 05:37:04.353759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.353792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-12-15 05:37:04.354026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.354060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 00:36:50.901 [2024-12-15 05:37:04.354311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.354344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 
00:36:50.901 [2024-12-15 05:37:04.354655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.901 [2024-12-15 05:37:04.354688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.901 qpair failed and we were unable to recover it. 
[identical connect()/qpair error repeated through 2024-12-15 05:37:04.385396 — duplicates elided]
00:36:50.904 [2024-12-15 05:37:04.385625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.385658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.385930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.385963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.386175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.386209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.386449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.386482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.386697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.386730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 
00:36:50.904 [2024-12-15 05:37:04.386878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.386911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.387092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.387126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.387352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.387387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.387595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.387628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.387925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.387965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 
00:36:50.904 [2024-12-15 05:37:04.388181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.388219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.388439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.388472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.388667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.388700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.388846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.388880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.389141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.389175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 
00:36:50.904 [2024-12-15 05:37:04.389380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.389411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.389677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.389709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.389935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.389968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.390172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.390205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.390504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.390537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 
00:36:50.904 [2024-12-15 05:37:04.390678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.390711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.390913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.390946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.391157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.391191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.391334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.391368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.391629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.391661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 
00:36:50.904 [2024-12-15 05:37:04.391789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.391822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.392022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.904 [2024-12-15 05:37:04.392057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.904 qpair failed and we were unable to recover it. 00:36:50.904 [2024-12-15 05:37:04.392204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.392236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.392378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.392410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.392610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.392644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.392843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.392875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.393085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.393119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.393398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.393432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.393625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.393657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.393856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.393888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.394089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.394128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.394272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.394309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.394591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.394625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.394767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.394800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.395079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.395114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.395240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.395274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.395417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.395450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.395632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.395665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.395775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.395807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.396018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.396052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.396256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.396290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.396428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.396460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.396599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.396632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.396882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.396916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.397167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.397202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.397486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.397520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.397712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.397745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.398014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.398049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.398236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.398270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.398459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.398492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.398679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.398711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.398906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.398940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.399096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.399130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.399246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.399286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.399593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.399626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.399758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.399791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.400009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.400043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.400263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.400296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.400442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.400475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 00:36:50.905 [2024-12-15 05:37:04.400756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.905 [2024-12-15 05:37:04.400790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.905 qpair failed and we were unable to recover it. 
00:36:50.905 [2024-12-15 05:37:04.400922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.400954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.401094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.401129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.401245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.401278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.401413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.401447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.401644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.401678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 
00:36:50.906 [2024-12-15 05:37:04.401883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.401915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.402115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.402157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.402344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.402378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.402578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.402611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 00:36:50.906 [2024-12-15 05:37:04.402860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.402893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it. 
00:36:50.906 [2024-12-15 05:37:04.403109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.906 [2024-12-15 05:37:04.403144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.906 qpair failed and we were unable to recover it.
[... the same posix_sock_create / nvme_tcp_qpair_connect_sock error pair repeats for tqpair=0x213c6a0 (addr=10.0.0.2, port=4420) on every retry from 2024-12-15 05:37:04.403260 through 05:37:04.423894, each attempt failing with errno = 111 (ECONNREFUSED) and ending in "qpair failed and we were unable to recover it." ...]
00:36:50.908 [2024-12-15 05:37:04.424223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.908 [2024-12-15 05:37:04.424300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:50.908 qpair failed and we were unable to recover it.
00:36:50.908 [2024-12-15 05:37:04.424519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x214a5e0 is same with the state(6) to be set
00:36:50.909 [2024-12-15 05:37:04.424842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.424920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it.
[... the same error pair repeats for tqpair=0x7fa228000b90 (addr=10.0.0.2, port=4420) from 2024-12-15 05:37:04.425145 through 05:37:04.428195, errno = 111 on every attempt ...]
00:36:50.909 [2024-12-15 05:37:04.428306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.428337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.428464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.428496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.428681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.428713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.428898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.428930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.429169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.429202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 
00:36:50.909 [2024-12-15 05:37:04.429407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.429439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.429554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.429586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.429703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.429735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.429929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.429973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.430118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.430152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 
00:36:50.909 [2024-12-15 05:37:04.430343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.430376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.430493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.430525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.430638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.430672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.430858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.430891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.431019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.431053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 
00:36:50.909 [2024-12-15 05:37:04.431263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.431295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.431482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.431517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.431628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.431659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.431785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.431816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 00:36:50.909 [2024-12-15 05:37:04.432021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.909 [2024-12-15 05:37:04.432053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.909 qpair failed and we were unable to recover it. 
00:36:50.909 [2024-12-15 05:37:04.432187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.432219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.432335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.432367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.432488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.432520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.432727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.432759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.432945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.432978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 
00:36:50.910 [2024-12-15 05:37:04.433124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.433156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.433348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.433380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.433579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.433611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.433814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.433845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.434000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.434034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 
00:36:50.910 [2024-12-15 05:37:04.434177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.434209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.434415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.434449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.434651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.434684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.434894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.434926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.435059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.435092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 
00:36:50.910 [2024-12-15 05:37:04.435224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.435257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.435393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.435425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.435609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.435645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.435777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.435810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.436014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.436048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 
00:36:50.910 [2024-12-15 05:37:04.436186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.436218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.436362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.436395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.436662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.436694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.436802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.436833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.436982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.437026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 
00:36:50.910 [2024-12-15 05:37:04.437161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.437193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.437373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.437404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.437531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.437564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.437685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.437722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.437969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.438012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 
00:36:50.910 [2024-12-15 05:37:04.438210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.438243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.438365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.438398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.438605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.438637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.438756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.438788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.439032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.439066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 
00:36:50.910 [2024-12-15 05:37:04.439173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.439205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.439387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.439420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.439608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.439641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.910 qpair failed and we were unable to recover it. 00:36:50.910 [2024-12-15 05:37:04.439827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.910 [2024-12-15 05:37:04.439859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.440051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.440085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 
00:36:50.911 [2024-12-15 05:37:04.440209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.440241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.440454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.440487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.440618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.440650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.440790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.440821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.440948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.440980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 
00:36:50.911 [2024-12-15 05:37:04.441094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.441125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.441300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.441332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.441531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.441563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.441762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.441794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.441915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.441948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 
00:36:50.911 [2024-12-15 05:37:04.442209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.442242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.442451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.442482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.442659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.442691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.442942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.442975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 00:36:50.911 [2024-12-15 05:37:04.443181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.443213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 
00:36:50.911 [2024-12-15 05:37:04.443411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.911 [2024-12-15 05:37:04.443443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.911 qpair failed and we were unable to recover it. 
[... the same three-message sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock error for tqpair=0x7fa228000b90, addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats continuously from 05:37:04.443576 through 05:37:04.464387; repeated entries omitted ...]
00:36:50.914 [2024-12-15 05:37:04.464499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.464529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.464673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.464705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.464879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.464929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.465190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.465223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.465336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.465367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 
00:36:50.914 [2024-12-15 05:37:04.465547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.465621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.465758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.465795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.465986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.466038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.466153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.466186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.466383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.466417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 
00:36:50.914 [2024-12-15 05:37:04.466538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.466571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.466747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.914 [2024-12-15 05:37:04.466779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.914 qpair failed and we were unable to recover it. 00:36:50.914 [2024-12-15 05:37:04.466892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.466925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.467064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.467099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.467230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.467263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.467486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.467524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.467651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.467683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.467791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.467825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.467927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.467959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.468193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.468227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.468402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.468434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.468555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.468587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.468705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.468738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.468857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.468891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.469072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.469112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.469228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.469262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.469381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.469413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.469616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.469650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.469828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.469860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.470133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.470168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.470299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.470332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.470438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.470470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.470618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.470654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.470855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.470887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.471075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.471108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.471289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.471320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.471437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.471470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.471602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.471633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.471737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.471769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.471946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.471978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.472112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.472144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.472269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.472301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.472422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.472454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.472626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.472658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.472833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.472865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.472976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.473033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.473154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.473187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.473297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.473328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.473457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.473488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.915 [2024-12-15 05:37:04.473669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.473700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 
00:36:50.915 [2024-12-15 05:37:04.473818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.915 [2024-12-15 05:37:04.473850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.915 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.474067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.474100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.474210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.474242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.474422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.474454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.474560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.474592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 
00:36:50.916 [2024-12-15 05:37:04.474766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.474798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.474973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.475018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.475145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.475176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.475296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.475327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.475512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.475545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 
00:36:50.916 [2024-12-15 05:37:04.475665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.475697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.475819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.475851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.476039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.476073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.476188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.476218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.476392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.476424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 
00:36:50.916 [2024-12-15 05:37:04.476538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.476570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.476691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.476723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.476907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.476939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.477125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.477158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.477262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.477293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 
00:36:50.916 [2024-12-15 05:37:04.477475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.477506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.477620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.477652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.477769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.477805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.477991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.478037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 00:36:50.916 [2024-12-15 05:37:04.478221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.916 [2024-12-15 05:37:04.478254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.916 qpair failed and we were unable to recover it. 
00:36:50.916 [2024-12-15 05:37:04.478357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:50.916 [2024-12-15 05:37:04.478387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 
00:36:50.916 qpair failed and we were unable to recover it. 
[... the three-line error sequence above repeated ~113 more times between 05:37:04.478654 and 05:37:04.499248, all with the same errno (111), tqpair (0x7fa228000b90), addr (10.0.0.2), and port (4420) ...] 
00:36:50.919 [2024-12-15 05:37:04.499479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:50.919 [2024-12-15 05:37:04.499510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 
00:36:50.919 qpair failed and we were unable to recover it. 
00:36:50.919 [2024-12-15 05:37:04.499626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.919 [2024-12-15 05:37:04.499658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.919 qpair failed and we were unable to recover it. 00:36:50.919 [2024-12-15 05:37:04.499789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.919 [2024-12-15 05:37:04.499821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.919 qpair failed and we were unable to recover it. 00:36:50.919 [2024-12-15 05:37:04.499921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.919 [2024-12-15 05:37:04.499958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.919 qpair failed and we were unable to recover it. 00:36:50.919 [2024-12-15 05:37:04.500160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.919 [2024-12-15 05:37:04.500193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.919 qpair failed and we were unable to recover it. 00:36:50.919 [2024-12-15 05:37:04.500368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.919 [2024-12-15 05:37:04.500400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.919 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.500505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.500536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.500721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.500753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.501034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.501068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.501176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.501207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.501313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.501345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.501540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.501573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.501697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.501728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.501855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.501887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.502079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.502114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.502229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.502260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.502434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.502466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.502668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.502700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.502939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.502970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.503159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.503191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.503433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.503466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.503671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.503703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.503819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.503850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.504040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.504073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.504177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.504208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.504327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.504359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.504553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.504584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.504761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.504792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.504917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.504949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.505200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.505233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.505429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.505461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.505643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.505674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.505777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.505807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.506014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.506047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.506217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.506249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.506437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.506468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.506582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.506615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.506827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.506858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.506977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.507018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.507137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.507168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.507273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.507305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 
00:36:50.920 [2024-12-15 05:37:04.507421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.507452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.507649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.920 [2024-12-15 05:37:04.507680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.920 qpair failed and we were unable to recover it. 00:36:50.920 [2024-12-15 05:37:04.507875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.507914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.508035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.508068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.508185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.508216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 
00:36:50.921 [2024-12-15 05:37:04.508336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.508368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.508471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.508501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.508684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.508716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.508921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.508953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.509091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.509124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 
00:36:50.921 [2024-12-15 05:37:04.509235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.509267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.509445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.509476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.509590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.509621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.509788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.509819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.509944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.509976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 
00:36:50.921 [2024-12-15 05:37:04.510105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.510138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.510329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.510361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.510468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.510499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.510612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.510643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.510812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.510843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 
00:36:50.921 [2024-12-15 05:37:04.510962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.511006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.511191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.511223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.511340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.511372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.511483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.511515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.511697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.511728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 
00:36:50.921 [2024-12-15 05:37:04.511909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.511941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.512132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.512166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.512270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.512300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.512415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.512446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.512573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.512605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 
00:36:50.921 [2024-12-15 05:37:04.512733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.512764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.512889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.512920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.513187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.513222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.513530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.513561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.513800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.513833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 
00:36:50.921 [2024-12-15 05:37:04.514016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.921 [2024-12-15 05:37:04.514049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.921 qpair failed and we were unable to recover it. 00:36:50.921 [2024-12-15 05:37:04.514160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.922 [2024-12-15 05:37:04.514191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.922 qpair failed and we were unable to recover it. 00:36:50.922 [2024-12-15 05:37:04.514454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.922 [2024-12-15 05:37:04.514487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.922 qpair failed and we were unable to recover it. 00:36:50.922 [2024-12-15 05:37:04.514668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.922 [2024-12-15 05:37:04.514699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.922 qpair failed and we were unable to recover it. 00:36:50.922 [2024-12-15 05:37:04.514815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.922 [2024-12-15 05:37:04.514847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.922 qpair failed and we were unable to recover it. 
00:36:50.923 [2024-12-15 05:37:04.524924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.923 [2024-12-15 05:37:04.524955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.923 qpair failed and we were unable to recover it. 00:36:50.923 [2024-12-15 05:37:04.525191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.923 [2024-12-15 05:37:04.525263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.923 qpair failed and we were unable to recover it. 00:36:50.923 [2024-12-15 05:37:04.525462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.923 [2024-12-15 05:37:04.525500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.923 qpair failed and we were unable to recover it. 00:36:50.923 [2024-12-15 05:37:04.525612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.923 [2024-12-15 05:37:04.525649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.923 qpair failed and we were unable to recover it. 00:36:50.923 [2024-12-15 05:37:04.525843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.923 [2024-12-15 05:37:04.525879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.923 qpair failed and we were unable to recover it. 
00:36:50.925 [2024-12-15 05:37:04.536014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.536048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.536156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.536188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.536314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.536347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.536466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.536499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.536646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.536680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 
00:36:50.925 [2024-12-15 05:37:04.536853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.536885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.537010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.537042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.537159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.537192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.537412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.537446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.537559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.537591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 
00:36:50.925 [2024-12-15 05:37:04.537718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.537750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.537874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.537908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.538043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.538082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.538215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.538253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.538374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.538408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 
00:36:50.925 [2024-12-15 05:37:04.538645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.538686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.538825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.538857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.539005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.539041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.539271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.539304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.539427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.539460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 
00:36:50.925 [2024-12-15 05:37:04.539572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.539604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.539782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.539818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.540017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.540060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.540236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.540267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.540384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.540421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 
00:36:50.925 [2024-12-15 05:37:04.540554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.540587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.540701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.540735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.540860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.540897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.541100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.541133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 00:36:50.925 [2024-12-15 05:37:04.541304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.925 [2024-12-15 05:37:04.541336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.925 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.541448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.541480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.541616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.541647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.541757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.541787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.542023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.542056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.542184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.542217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.542332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.542363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.542466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.542498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.542608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.542639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.542812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.542843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.542969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.543013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.543141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.543178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.543357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.543388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.543512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.543543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.543829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.543860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.543982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.544026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.544150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.544182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.544386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.544418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.544598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.544629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.544818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.544850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.545021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.545055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.545181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.545212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.545410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.545442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.545629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.545662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.545797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.545828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.545984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.546028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.546222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.546254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.546385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.546417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.546533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.546565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.546696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.546728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.546852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.546883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.547066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.547099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.547215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.547246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.547432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.547463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.547635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.547667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.547781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.547812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 
00:36:50.926 [2024-12-15 05:37:04.547918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.547950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.548084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.926 [2024-12-15 05:37:04.548117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.926 qpair failed and we were unable to recover it. 00:36:50.926 [2024-12-15 05:37:04.548268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.548301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 00:36:50.927 [2024-12-15 05:37:04.548412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.548444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 00:36:50.927 [2024-12-15 05:37:04.548559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.548590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 
00:36:50.927 [2024-12-15 05:37:04.548698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.548729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 00:36:50.927 [2024-12-15 05:37:04.549013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.549046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 00:36:50.927 [2024-12-15 05:37:04.549287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.549320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 00:36:50.927 [2024-12-15 05:37:04.549508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.549540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 00:36:50.927 [2024-12-15 05:37:04.549745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.549776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it. 
00:36:50.927 [2024-12-15 05:37:04.549902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:50.927 [2024-12-15 05:37:04.549934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:50.927 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / "qpair failed and we were unable to recover it." pairs repeat continuously from 05:37:04.549902 through 05:37:04.570960, first for tqpair=0x7fa228000b90 and, from 05:37:04.562445 onward, for tqpair=0x213c6a0, always targeting addr=10.0.0.2, port=4420 ...]
00:36:51.217 [2024-12-15 05:37:04.571230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.571264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.571441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.571473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.571603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.571635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.571810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.571842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.571948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.571980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 
00:36:51.217 [2024-12-15 05:37:04.572106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.572139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.572319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.572351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.572463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.572495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.572667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.572700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.572886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.572918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 
00:36:51.217 [2024-12-15 05:37:04.573024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.573059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.573166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.573203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.573320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.573352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.573485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.573517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.573625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.573657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 
00:36:51.217 [2024-12-15 05:37:04.573846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.573879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.574011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.574044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.574164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.574196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.574370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.574403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.574581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.574613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 
00:36:51.217 [2024-12-15 05:37:04.574749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.217 [2024-12-15 05:37:04.574781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.217 qpair failed and we were unable to recover it. 00:36:51.217 [2024-12-15 05:37:04.574960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.575000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.575208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.575240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.575360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.575392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.575585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.575617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.575760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.575800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.575981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.576023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.576148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.576180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.576349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.576381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.576488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.576519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.576710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.576743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.576981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.577023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.577148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.577180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.577284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.577316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.577583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.577615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.577857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.577890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.578002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.578035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.578145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.578178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.578304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.578341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.578447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.578478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.578580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.578612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.578809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.578842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.578968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.579018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.579155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.579188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.579317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.579350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.579519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.579552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.579656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.579687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.579874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.579906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.580033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.580068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.580190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.580222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.580393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.580426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.580603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.580635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.580763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.580797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.580910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.580943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.581077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.581109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.581229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.581261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.581496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.581528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.581639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.581671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.581783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.581814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 00:36:51.218 [2024-12-15 05:37:04.581984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.218 [2024-12-15 05:37:04.582024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.218 qpair failed and we were unable to recover it. 
00:36:51.218 [2024-12-15 05:37:04.582140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.582172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.582360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.582392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.582563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.582596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.582771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.582803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.582913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.582945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 
00:36:51.219 [2024-12-15 05:37:04.583191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.583224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.583341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.583372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.583511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.583543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.583736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.583767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.583938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.583969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 
00:36:51.219 [2024-12-15 05:37:04.584184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.584217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.584337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.584369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.584473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.584505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.584614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.584646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 00:36:51.219 [2024-12-15 05:37:04.584756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.219 [2024-12-15 05:37:04.584788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.219 qpair failed and we were unable to recover it. 
00:36:51.219 [2024-12-15 05:37:04.584907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.219 [2024-12-15 05:37:04.584938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.219 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplets for tqpair=0x213c6a0 (addr=10.0.0.2, port=4420) repeat continuously from 05:37:04.584907 through 05:37:04.606704; duplicates elided ...]
00:36:51.222 [2024-12-15 05:37:04.606954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.606987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.607178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.607211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.607504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.607536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.607709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.607741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.607984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.608026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 
00:36:51.222 [2024-12-15 05:37:04.608153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.608185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.608307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.608339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.608459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.608491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.608602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.608635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.608738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.608771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 
00:36:51.222 [2024-12-15 05:37:04.608946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.608979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.609244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.609282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.609406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.609438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.609615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.609648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.222 [2024-12-15 05:37:04.609769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.609801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 
00:36:51.222 [2024-12-15 05:37:04.609913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.222 [2024-12-15 05:37:04.609946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.222 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.610190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.610223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.610366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.610398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.610531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.610563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.610806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.610841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 
00:36:51.223 [2024-12-15 05:37:04.610961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.611001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.611273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.611306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.611486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.611518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.611631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.611663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.611773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.611805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 
00:36:51.223 [2024-12-15 05:37:04.611986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.612029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.612133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.612165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.612287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.612318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.612514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.612546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.612724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.612756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 
00:36:51.223 [2024-12-15 05:37:04.613041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.613075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.613274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.613307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.613492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.613524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.613787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.613819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.614003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.614037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 
00:36:51.223 [2024-12-15 05:37:04.614218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.614248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.614419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.614451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.614618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.614651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.614834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.614878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.615062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.615096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 
00:36:51.223 [2024-12-15 05:37:04.615236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.615268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.615455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.615486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.615729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.615761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.615942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.615975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.616209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.616241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 
00:36:51.223 [2024-12-15 05:37:04.616352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.616383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.616565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.616596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.616726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.223 [2024-12-15 05:37:04.616757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.223 qpair failed and we were unable to recover it. 00:36:51.223 [2024-12-15 05:37:04.616882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.616913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.617035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.617068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.617255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.617286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.617471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.617503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.617680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.617713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.617889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.617920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.618043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.618075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.618289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.618320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.618435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.618466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.618659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.618691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.618882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.618913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.619095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.619129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.619248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.619280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.619466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.619497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.619675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.619706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.619883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.619916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.620056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.620089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.620215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.620247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.620367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.620398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.620502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.620533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.620651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.620683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.620874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.620906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.621034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.621068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.621190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.621222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.621453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.621485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.621598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.621630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.621807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.621840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.622018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.622052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.622225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.622257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.622375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.622407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.622514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.622546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.622774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.622846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.623048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.623087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.623262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.623294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.623404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.623437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.623544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.623576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.623697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.623729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 
00:36:51.224 [2024-12-15 05:37:04.623968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.224 [2024-12-15 05:37:04.624012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.224 qpair failed and we were unable to recover it. 00:36:51.224 [2024-12-15 05:37:04.624123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.624155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.624380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.624412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.624580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.624611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.624822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.624854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 
00:36:51.225 [2024-12-15 05:37:04.624970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.625009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.625124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.625156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.625279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.625320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.625447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.625478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.625583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.625615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 
00:36:51.225 [2024-12-15 05:37:04.625797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.625829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.626030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.626063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.626184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.626215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.626394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.626426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.626534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.626565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 
00:36:51.225 [2024-12-15 05:37:04.626746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.626778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.626960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.627019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.627192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.627223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.627339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.627371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.627483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.627515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 
00:36:51.225 [2024-12-15 05:37:04.627687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.627719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.627845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.627877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.628046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.628079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.628271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.628303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.628482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.628514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 
00:36:51.225 [2024-12-15 05:37:04.628648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.628680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.628816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.628848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.628954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.628985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.629110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.629141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.629256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.629287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 
00:36:51.225 [2024-12-15 05:37:04.629397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.629429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.629601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.629631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.629754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.629786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.629905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.629936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.630112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.630184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 
00:36:51.225 [2024-12-15 05:37:04.630396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.630432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.630541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.225 [2024-12-15 05:37:04.630572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.225 qpair failed and we were unable to recover it. 00:36:51.225 [2024-12-15 05:37:04.630694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.630727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.630902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.630934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.631155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.631189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 
00:36:51.226 [2024-12-15 05:37:04.631295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.631327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.631498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.631529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.631642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.631673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.631792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.631824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.632016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.632049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 
00:36:51.226 [2024-12-15 05:37:04.632285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.632317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.632434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.632465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.632646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.632687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.632878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.632910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.633029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.633063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 
00:36:51.226 [2024-12-15 05:37:04.633177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.633208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.633490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.633521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.633762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.633795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.633895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.633926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.634193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.634226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 
00:36:51.226 [2024-12-15 05:37:04.634334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.634366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.634535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.634566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.634677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.634707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.634896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.634927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.635048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.635081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 
00:36:51.226 [2024-12-15 05:37:04.635261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.635292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.635418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.635449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.635641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.635671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.635853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.635884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.635990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.636033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 
00:36:51.226 [2024-12-15 05:37:04.636203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.636235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.636354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.636387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.636568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.636600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.636785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.636816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.637001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.637034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 
00:36:51.226 [2024-12-15 05:37:04.637204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.637235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.637357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.637388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.637506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.637537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.226 [2024-12-15 05:37:04.637792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.226 [2024-12-15 05:37:04.637822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.226 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.638040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.638074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 
00:36:51.227 [2024-12-15 05:37:04.638209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.638241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.638487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.638519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.638638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.638669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.638842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.638873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.639075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.639108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 
00:36:51.227 [2024-12-15 05:37:04.639302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.639334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.639515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.639547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.639672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.639704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.639919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.639950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.640145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.640180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 
00:36:51.227 [2024-12-15 05:37:04.640317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.640348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.640462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.640494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.640665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.640703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.640915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.640946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.641075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.641108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 
00:36:51.227 [2024-12-15 05:37:04.641281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.641312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.641427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.641459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.641631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.641663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.641762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.641793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 00:36:51.227 [2024-12-15 05:37:04.641963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.227 [2024-12-15 05:37:04.642007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.227 qpair failed and we were unable to recover it. 
00:36:51.227 [2024-12-15 05:37:04.642135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.642168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.642334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.642365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.642486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.642517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.642629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.642660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.642780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.642810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.643058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.643092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.643271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.643302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.643486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.643517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.643631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.643661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.643841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.643873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.644013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.644044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.644226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.644258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.644376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.227 [2024-12-15 05:37:04.644408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.227 qpair failed and we were unable to recover it.
00:36:51.227 [2024-12-15 05:37:04.644508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.644538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.644655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.644686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.644806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.644836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.645029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.645061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.645266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.645298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.645405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.645437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.645655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.645729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.645877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.645913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.646052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.646088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.646214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.646247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.646437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.646469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.646708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.646741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.646916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.646948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.647147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.647180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.647315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.647347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.647529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.647561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.647670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.647702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.647830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.647863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.648034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.648067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.648185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.648218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.648486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.648519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.648705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.648737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.648866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.648898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.649077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.649111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.649231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.649262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.649394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.649426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.649532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.649563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.649684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.649715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.649840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.649872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.650006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.228 [2024-12-15 05:37:04.650039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.228 qpair failed and we were unable to recover it.
00:36:51.228 [2024-12-15 05:37:04.650151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.650182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.650362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.650394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.650507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.650540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.650757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.650794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.650898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.650931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.651112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.651145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.651331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.651363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.651476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.651507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.651632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.651664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.651777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.651809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.651989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.652035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.652285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.652316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.652439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.652471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.652595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.652626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.652813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.652844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.653017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.653050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.653220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.653251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.653389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.653421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.653601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.653633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.653808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.653840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.653953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.653983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.654115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.654147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.654279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.654311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.654490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.654521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.654641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.654672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.654783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.654815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.654947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.654978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.655172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.655205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.655426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.655458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.655595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.655626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.655812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.655849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.656104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.656139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.656254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.656286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.656413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.656444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.656698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.656730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.656902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.229 [2024-12-15 05:37:04.656934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.229 qpair failed and we were unable to recover it.
00:36:51.229 [2024-12-15 05:37:04.657125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.657159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.657282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.657314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.657453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.657484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.657651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.657682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.657865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.657896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.658106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.658140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.658258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.658289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.658540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.658572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.658695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.658727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.658844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.658876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.659050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.659083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.659219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.659252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.659366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.659397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.659592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.659625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.659740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.659773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.659960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.230 [2024-12-15 05:37:04.660000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.230 qpair failed and we were unable to recover it.
00:36:51.230 [2024-12-15 05:37:04.660112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.660147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.660329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.660360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.660550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.660582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.660795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.660827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.660959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.661002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 
00:36:51.230 [2024-12-15 05:37:04.661116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.661150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.661264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.661296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.661466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.661498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.661610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.661641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.661748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.661780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 
00:36:51.230 [2024-12-15 05:37:04.661951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.661983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.662233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.662265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.662449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.662480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.662593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.662623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.662737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.662767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 
00:36:51.230 [2024-12-15 05:37:04.662872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.662904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.663081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.663115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.663375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.663407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.663543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.663574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.663764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.663795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 
00:36:51.230 [2024-12-15 05:37:04.663895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.230 [2024-12-15 05:37:04.663926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.230 qpair failed and we were unable to recover it. 00:36:51.230 [2024-12-15 05:37:04.664044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.664077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.664178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.664210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.664327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.664359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.664477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.664509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.664679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.664711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.664828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.664858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.665042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.665075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.665190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.665221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.665337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.665369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.665562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.665593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.665765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.665797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.665970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.666020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.666152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.666184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.666284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.666316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.666436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.666467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.666578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.666610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.666713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.666744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.666986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.667030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.667149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.667182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.667364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.667396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.667577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.667608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.667792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.667824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.668035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.668068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.668182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.668214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.668407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.668439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.668554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.668591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.668706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.668737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.668931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.668963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.669076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.669108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.669217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.669248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.669419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.669451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.669686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.669717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.669831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.669861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.670046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.670080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.670183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.670215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.670410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.670442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.670653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.670685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.670799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.670831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.231 [2024-12-15 05:37:04.671013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.671047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 
00:36:51.231 [2024-12-15 05:37:04.671175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.231 [2024-12-15 05:37:04.671206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.231 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.671311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.671342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.671441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.671473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.671586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.671616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.671732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.671764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.232 [2024-12-15 05:37:04.671872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.671903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.672076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.672109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.672294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.672326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.672436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.672467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.672647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.672678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.232 [2024-12-15 05:37:04.672787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.672818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.672945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.672978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.673113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.673145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.673325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.673362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.673474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.673506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.232 [2024-12-15 05:37:04.673676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.673708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.673811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.673843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.674019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.674051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.674192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.674224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.674326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.674357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.232 [2024-12-15 05:37:04.674487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.674518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.674633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.674665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.674782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.674815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.675004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.675037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.675205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.675237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.232 [2024-12-15 05:37:04.675355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.675387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.675501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.675533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.675653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.675687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.675876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.675909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.676016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.676050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.232 [2024-12-15 05:37:04.676155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.676187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.676365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.676398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.676664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.676696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.676822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.676854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.676978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.677020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.232 [2024-12-15 05:37:04.677130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.677163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.677340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.677373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.677539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.677571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.677689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.677722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 00:36:51.232 [2024-12-15 05:37:04.677893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.232 [2024-12-15 05:37:04.677924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.232 qpair failed and we were unable to recover it. 
00:36:51.233 [2024-12-15 05:37:04.678035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.678073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.678315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.678348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.678471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.678504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.678636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.678668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.678862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.678894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 
00:36:51.233 [2024-12-15 05:37:04.679076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.679110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.679351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.679384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.679488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.679520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.679709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.679742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.679882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.679913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 
00:36:51.233 [2024-12-15 05:37:04.680035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.680074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.680181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.680214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.680326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.680357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.680547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.680580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.680827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.680899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 
00:36:51.233 [2024-12-15 05:37:04.681042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.681079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.681262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.681297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.681475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.681513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.681684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.681716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.681827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.681858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 
00:36:51.233 [2024-12-15 05:37:04.682009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.682042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.682143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.682176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.682294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.682326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.682445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.682477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.682600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.682633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 
00:36:51.233 [2024-12-15 05:37:04.682805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.682837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.683023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.683056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.683177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.683219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.683328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.683359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.683471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.683503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 
00:36:51.233 [2024-12-15 05:37:04.683682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.233 [2024-12-15 05:37:04.683714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.233 qpair failed and we were unable to recover it. 00:36:51.233 [2024-12-15 05:37:04.683858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.683889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.684068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.684101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.684232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.684264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.684374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.684405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.234 [2024-12-15 05:37:04.684517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.684549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.684670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.684702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.684873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.684905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.685034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.685066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.685238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.685271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.234 [2024-12-15 05:37:04.685446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.685478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.685654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.685685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.685795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.685827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.686008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.686040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.686146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.686185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.234 [2024-12-15 05:37:04.686294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.686326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.686441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.686472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.686604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.686637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.686850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.686882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.687017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.687050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.234 [2024-12-15 05:37:04.687223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.687256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.687369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.687401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.687531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.687562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.687701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.687732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.687904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.687977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.234 [2024-12-15 05:37:04.688146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.688183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.688323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.688355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.688478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.688511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.688691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.688723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.688833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.688865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.234 [2024-12-15 05:37:04.689047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.689081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.689220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.689250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.689435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.689466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.689581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.689612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.689723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.689754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.234 [2024-12-15 05:37:04.689924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.689956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.690152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.690186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.690369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.690411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.690677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.690711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 00:36:51.234 [2024-12-15 05:37:04.690951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.234 [2024-12-15 05:37:04.690983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.234 qpair failed and we were unable to recover it. 
00:36:51.235 [2024-12-15 05:37:04.691117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.691149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.691256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.691288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.691469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.691500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.691696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.691728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.691918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.691949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 
00:36:51.235 [2024-12-15 05:37:04.692079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.692113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.692380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.692412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.692592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.692625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.692833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.692865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.693049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.693083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 
00:36:51.235 [2024-12-15 05:37:04.693259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.693290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.693429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.693461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.693575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.693606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.693781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.693813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 00:36:51.235 [2024-12-15 05:37:04.693933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.235 [2024-12-15 05:37:04.693964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.235 qpair failed and we were unable to recover it. 
00:36:51.236 [2024-12-15 05:37:04.703449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.236 [2024-12-15 05:37:04.703520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.236 qpair failed and we were unable to recover it.
00:36:51.237 [2024-12-15 05:37:04.710156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.237 [2024-12-15 05:37:04.710226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.237 qpair failed and we were unable to recover it.
00:36:51.238 [2024-12-15 05:37:04.715365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.715397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.715634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.715666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.715840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.715872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.716050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.716085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.716228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.716261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 
00:36:51.238 [2024-12-15 05:37:04.716455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.716486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.716659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.716700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.716900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.716932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.717145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.717178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.717354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.717385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 
00:36:51.238 [2024-12-15 05:37:04.717555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.717587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.717706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.717737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.717926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.717957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.718142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.718175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.718292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.718323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 
00:36:51.238 [2024-12-15 05:37:04.718438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.718470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.718660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.718693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.718809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.718839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.719032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.719065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.719194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.719226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 
00:36:51.238 [2024-12-15 05:37:04.719528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.719560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.719754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.719786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.238 [2024-12-15 05:37:04.720030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.238 [2024-12-15 05:37:04.720063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.238 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.720190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.720221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.720462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.720494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 
00:36:51.239 [2024-12-15 05:37:04.720678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.720710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.720828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.720859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.721047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.721080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.721201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.721232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.721472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.721504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 
00:36:51.239 [2024-12-15 05:37:04.721683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.721715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.721846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.721878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.722069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.722104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.722355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.722426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.722593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.722668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 
00:36:51.239 [2024-12-15 05:37:04.722820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.722862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.723075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.723111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.723241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.723276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.723461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.723496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.723690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.723725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 
00:36:51.239 [2024-12-15 05:37:04.723850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.723882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.724026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.724081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.724302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.724337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.724465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.724499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.724690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.724723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 
00:36:51.239 [2024-12-15 05:37:04.724853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.724889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.725024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.725074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.725262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.725296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.725401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.725432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.725604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.725636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 
00:36:51.239 [2024-12-15 05:37:04.725809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.725841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.725966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.726013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.726136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.726172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.726427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.726462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.726715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.726750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 
00:36:51.239 [2024-12-15 05:37:04.726886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.726925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.727054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.727088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.727216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.727252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.727429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.239 [2024-12-15 05:37:04.727463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.239 qpair failed and we were unable to recover it. 00:36:51.239 [2024-12-15 05:37:04.727589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.727621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 
00:36:51.240 [2024-12-15 05:37:04.727806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.727839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.728031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.728064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.728266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.728299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.728536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.728567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.728745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.728778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 
00:36:51.240 [2024-12-15 05:37:04.728957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.728989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.729130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.729163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.729373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.729407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.729540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.729573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.729755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.729789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 
00:36:51.240 [2024-12-15 05:37:04.729906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.729940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.730136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.730170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.730409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.730442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.730602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.730673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.730824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.730860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 
00:36:51.240 [2024-12-15 05:37:04.731008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.731043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.731224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.731257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.731487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.731518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.731626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.731657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 00:36:51.240 [2024-12-15 05:37:04.731861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.240 [2024-12-15 05:37:04.731894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.240 qpair failed and we were unable to recover it. 
00:36:51.241 [2024-12-15 05:37:04.737146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.241 [2024-12-15 05:37:04.737177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.241 qpair failed and we were unable to recover it.
00:36:51.241 [2024-12-15 05:37:04.737279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.241 [2024-12-15 05:37:04.737311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.241 qpair failed and we were unable to recover it.
00:36:51.241 [2024-12-15 05:37:04.737509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.241 [2024-12-15 05:37:04.737540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.241 qpair failed and we were unable to recover it.
00:36:51.241 [2024-12-15 05:37:04.737821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.241 [2024-12-15 05:37:04.737893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.241 qpair failed and we were unable to recover it.
00:36:51.241 [2024-12-15 05:37:04.738093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.241 [2024-12-15 05:37:04.738132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.241 qpair failed and we were unable to recover it.
00:36:51.243 [2024-12-15 05:37:04.752439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.752472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.752737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.752769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.752940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.752972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.753227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.753260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.753390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.753422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 
00:36:51.243 [2024-12-15 05:37:04.753554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.753586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.753703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.753734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.753850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.753883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.754022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.754055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.754170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.754202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 
00:36:51.243 [2024-12-15 05:37:04.754398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.754432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.754623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.754655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.754780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.754810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.243 [2024-12-15 05:37:04.754927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.243 [2024-12-15 05:37:04.754959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.243 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.755146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.755179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.755385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.755415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.755582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.755615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.755854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.755887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.756140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.756214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.756519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.756561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.756816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.756851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.756976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.757031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.757274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.757310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.757561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.757597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.757795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.757829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.757968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.758010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.758132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.758166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.758271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.758306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.758482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.758514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.758719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.758752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.758940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.758974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.759112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.759156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.759483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.759518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.759640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.759677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.759860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.759894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.760021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.760056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.760182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.760213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.760330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.760363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.760483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.760516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.760631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.760662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.760783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.760815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.761095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.761131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.761314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.761346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.761455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.761487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.761591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.761623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.761854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.761886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.762082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.762116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.762232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.762264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.762432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.762464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.762585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.762617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 
00:36:51.244 [2024-12-15 05:37:04.762748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.762781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.244 [2024-12-15 05:37:04.762955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.244 [2024-12-15 05:37:04.763001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.244 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.763181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.763214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.763506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.763540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.763651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.763685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 
00:36:51.245 [2024-12-15 05:37:04.763790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.763821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.764006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.764042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.764222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.764255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.764394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.764430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.764546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.764578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 
00:36:51.245 [2024-12-15 05:37:04.764759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.764790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.764969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.765010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.765204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.765239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.765352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.765384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.765502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.765533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 
00:36:51.245 [2024-12-15 05:37:04.765736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.765768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.765950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.765983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.766122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.766154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.766267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.766299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.766477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.766510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 
00:36:51.245 [2024-12-15 05:37:04.766610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.766641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.766889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.766921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.767045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.767080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.767338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.767372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 00:36:51.245 [2024-12-15 05:37:04.767489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.767522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it. 
00:36:51.245 [2024-12-15 05:37:04.767746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.245 [2024-12-15 05:37:04.767778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.245 qpair failed and we were unable to recover it.
[... the same error triple (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error; qpair failed and we were unable to recover it) repeats over 100 more times between 05:37:04.767903 and 05:37:04.788519, all targeting addr=10.0.0.2, port=4420. Most repeats report tqpair=0x213c6a0; a handful report tqpair=0x7fa234000b90 and one reports tqpair=0x7fa228000b90 ...]
00:36:51.248 [2024-12-15 05:37:04.788690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.788723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 00:36:51.248 [2024-12-15 05:37:04.788896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.788927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 00:36:51.248 [2024-12-15 05:37:04.789050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.789083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 00:36:51.248 [2024-12-15 05:37:04.789187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.789227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 00:36:51.248 [2024-12-15 05:37:04.789339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.789371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 
00:36:51.248 [2024-12-15 05:37:04.789481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.789513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 00:36:51.248 [2024-12-15 05:37:04.789692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.789722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 00:36:51.248 [2024-12-15 05:37:04.789921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.248 [2024-12-15 05:37:04.789954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.248 qpair failed and we were unable to recover it. 00:36:51.248 [2024-12-15 05:37:04.790137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.790169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.790298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.790330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 
00:36:51.249 [2024-12-15 05:37:04.790448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.790479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.790651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.790681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.790810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.790842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.791028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.791062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.791165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.791198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 
00:36:51.249 [2024-12-15 05:37:04.791371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.791401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.791531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.791561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.791688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.791722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.791957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.791989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.792121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.792153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 
00:36:51.249 [2024-12-15 05:37:04.792327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.792357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.792464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.792496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.792601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.792632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.792735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.792766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.792958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.793003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 
00:36:51.249 [2024-12-15 05:37:04.793195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.793228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.793337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.793368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.793486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.793519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.793695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.793726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.793912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.793943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 
00:36:51.249 [2024-12-15 05:37:04.794195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.794228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.794356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.794387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.794570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.794604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.794778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.794810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.794984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.795027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 
00:36:51.249 [2024-12-15 05:37:04.795266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.795300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.795499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.795531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.795725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.795757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.796016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.796049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.796237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.796270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 
00:36:51.249 [2024-12-15 05:37:04.796452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.796484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.796659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.796692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.796898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.796930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.797125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.249 [2024-12-15 05:37:04.797159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.249 qpair failed and we were unable to recover it. 00:36:51.249 [2024-12-15 05:37:04.797422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.797493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 
00:36:51.250 [2024-12-15 05:37:04.797624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.797660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.797781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.797815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.797933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.797964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.798216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.798287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.798467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.798539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 
00:36:51.250 [2024-12-15 05:37:04.798770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.798805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.798925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.798957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.799080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.799114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.799225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.799256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.799377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.799411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 
00:36:51.250 [2024-12-15 05:37:04.799619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.799650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.799764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.799795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.799983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.800027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.800207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.800238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.800415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.800450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 
00:36:51.250 [2024-12-15 05:37:04.800641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.800672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.800776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.800808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.800984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.801028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.801301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.801333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.801594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.801627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 
00:36:51.250 [2024-12-15 05:37:04.801815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.801848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.801950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.801981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.802103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.802136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.802346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.802379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.802619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.802652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 
00:36:51.250 [2024-12-15 05:37:04.802759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.802792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.803033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.803105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.803322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.803359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.803540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.803571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.803701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.803732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 
00:36:51.250 [2024-12-15 05:37:04.803909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.803944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.804122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.804155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.804287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.250 [2024-12-15 05:37:04.804318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.250 qpair failed and we were unable to recover it. 00:36:51.250 [2024-12-15 05:37:04.804513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.251 [2024-12-15 05:37:04.804546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.251 qpair failed and we were unable to recover it. 00:36:51.251 [2024-12-15 05:37:04.804726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.251 [2024-12-15 05:37:04.804757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.251 qpair failed and we were unable to recover it. 
00:36:51.254 [2024-12-15 05:37:04.828385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.828416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.828533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.828570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.828691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.828723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.828902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.828933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.829081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.829113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 
00:36:51.254 [2024-12-15 05:37:04.829283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.829315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.829483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.829514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.829683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.829715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.829836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.829868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.830038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.830070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 
00:36:51.254 [2024-12-15 05:37:04.830250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.830282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.830519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.830551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.830667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.830698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.830872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.830903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.831037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.831070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 
00:36:51.254 [2024-12-15 05:37:04.831248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.831280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.831464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.831494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.831663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.831694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.831873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.831905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.832038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.832069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 
00:36:51.254 [2024-12-15 05:37:04.832275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.832306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.832486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.832518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.832724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.832755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.832935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.832966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.833115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.833147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 
00:36:51.254 [2024-12-15 05:37:04.833406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.833437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.833617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.833648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.833774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.833806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.833987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.834030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.834290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.834321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 
00:36:51.254 [2024-12-15 05:37:04.834504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.834537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.834743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.254 [2024-12-15 05:37:04.834774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.254 qpair failed and we were unable to recover it. 00:36:51.254 [2024-12-15 05:37:04.834959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.834990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.835131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.835164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.835360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.835391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.835492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.835523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.835692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.835724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.835909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.835941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.836088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.836121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.836371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.836402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.836637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.836668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.836792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.836829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.837066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.837099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.837289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.837321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.837457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.837489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.837730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.837761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.837888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.837919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.838046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.838079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.838263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.838293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.838466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.838497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.838614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.838647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.838888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.838919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.839177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.839210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.839325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.839356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.839524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.839556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.839778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.839809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.840003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.840036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.840206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.840239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.840415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.840446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.840562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.840593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.840762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.840794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.840910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.840941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.841131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.841164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.841282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.841314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.841497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.841528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.841646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.841678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.841857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.841889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.842071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.842104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.842350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.842382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 00:36:51.255 [2024-12-15 05:37:04.842502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.255 [2024-12-15 05:37:04.842533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.255 qpair failed and we were unable to recover it. 
00:36:51.255 [2024-12-15 05:37:04.842633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.842665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.842781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.842812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.843046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.843079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.843280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.843313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.843491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.843522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 
00:36:51.256 [2024-12-15 05:37:04.843764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.843795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.844007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.844041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.844233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.844264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.844375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.844406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 00:36:51.256 [2024-12-15 05:37:04.844520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.256 [2024-12-15 05:37:04.844551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.256 qpair failed and we were unable to recover it. 
00:36:51.256 [... the same connect() failed, errno = 111 / sock connection error / qpair failed triple repeats for the remaining connection attempts to addr=10.0.0.2, port=4420, timestamps 05:37:04.844791 through 05:37:04.868936; the final two attempts report tqpair=0x7fa234000b90 instead of 0x7fa228000b90 ...]
00:36:51.259 [2024-12-15 05:37:04.869076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.869111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.869293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.869325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.869562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.869593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.869756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.869788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.870001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.870034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 
00:36:51.259 [2024-12-15 05:37:04.870215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.870246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.870418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.870450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.870687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.870718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.870855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.870885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.871011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.871044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 
00:36:51.259 [2024-12-15 05:37:04.871235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.871267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.871536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.871576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.871710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.871741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.871911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.871943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.259 [2024-12-15 05:37:04.872167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.872199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 
00:36:51.259 [2024-12-15 05:37:04.872332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.259 [2024-12-15 05:37:04.872363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.259 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.872557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.872590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.872855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.872887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.873072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.873107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.873345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.873378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 
00:36:51.544 [2024-12-15 05:37:04.873616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.873648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.873821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.873852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.874033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.874066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.874245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.874276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.874512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.874545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 
00:36:51.544 [2024-12-15 05:37:04.874738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.874769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.874891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.874922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.875035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.875069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.875240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.875271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.875457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.875487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 
00:36:51.544 [2024-12-15 05:37:04.875754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.875784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.875998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.876031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.876142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.876174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.876289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.544 [2024-12-15 05:37:04.876321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.544 qpair failed and we were unable to recover it. 00:36:51.544 [2024-12-15 05:37:04.876457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.876488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.876680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.876711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.876834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.876866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.877131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.877163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.877338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.877370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.877545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.877577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.877702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.877733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.877942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.877973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.878169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.878202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.878382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.878414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.878526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.878558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.878727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.878759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.878975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.879015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.879150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.879183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.879368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.879401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.879572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.879603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.879846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.879878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.880068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.880107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.880390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.880423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.880556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.880588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.880712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.880743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.881014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.881047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.881232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.881264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.881438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.881470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.881586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.881617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.881827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.881859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.882034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.882068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.882244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.882276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.882472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.882505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.882708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.882740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.882908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.882939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.883054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.883087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.883272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.883303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.883417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.883449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.883634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.883667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.883848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.883878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.884016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.884049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.884186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.884217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.884343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.884374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.884558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.884590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.884787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.884819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.884942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.884973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.885158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.885189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.885322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.885353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.885585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.885658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.885922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.886001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.886228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.886265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.886387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.886420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.886677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.886709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.886973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.887016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.887278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.887309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.887494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.887526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.887769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.887801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.887931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.887963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.888227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.888269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.888466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.888500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 
00:36:51.545 [2024-12-15 05:37:04.888622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.888655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.888765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.888806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.545 qpair failed and we were unable to recover it. 00:36:51.545 [2024-12-15 05:37:04.888917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.545 [2024-12-15 05:37:04.888949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.889228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.889261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.889465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.889497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.889669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.889701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.889876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.889907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.890015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.890048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.890172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.890202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.890330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.890361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.890564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.890596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.890725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.890756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.891016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.891050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.891174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.891205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.891376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.891408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.891627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.891663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.891806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.891841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.891960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.891999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.892236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.892268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.892396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.892427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.892603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.892635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.892756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.892788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.892922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.892953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.893202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.893236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.893425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.893457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.893575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.893606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.893718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.893749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.893939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.893971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.894123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.894165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.894288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.894320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.894509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.894540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.894643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.894675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.894799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.894830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.895014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.895048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.895221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.895253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.895428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.895460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.895646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.895677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.895871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.895902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.896089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.896123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.896315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.896347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.896522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.896554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.896735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.896766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.896962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.897000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.897190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.897223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.897391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.897423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.897613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.897645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.897811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.897842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.898034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.898067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.898252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.898283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.898547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.898579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.898843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.898875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.898978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.899019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.899212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.899244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.899453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.899485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.899725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.899755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.900023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.900057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.900195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.900227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.900484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.900516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.900794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.900825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.901029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.901062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 
00:36:51.546 [2024-12-15 05:37:04.901267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.901299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.901408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.546 [2024-12-15 05:37:04.901438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.546 qpair failed and we were unable to recover it. 00:36:51.546 [2024-12-15 05:37:04.901619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.901650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.901786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.901819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.902012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.902045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 
00:36:51.547 [2024-12-15 05:37:04.902309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.902340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.902536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.902568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.902681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.902713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.903006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.903045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.903181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.903215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 
00:36:51.547 [2024-12-15 05:37:04.903453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.903485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.903767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.903797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.903923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.903956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.904093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.904126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.904306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.904338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 
00:36:51.547 [2024-12-15 05:37:04.904518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.904549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.904679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.904709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.904894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.904927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.905051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.905084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 00:36:51.547 [2024-12-15 05:37:04.905191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.547 [2024-12-15 05:37:04.905222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.547 qpair failed and we were unable to recover it. 
00:36:51.547 [2024-12-15 05:37:04.905346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 
00:36:51.547 [2024-12-15 05:37:04.905379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 
00:36:51.547 qpair failed and we were unable to recover it. 
00:36:51.551 [2024-12-15 05:37:04.930488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.930520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.930714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.930746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.930967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.931005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.931253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.931284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.931457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.931490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 
00:36:51.551 [2024-12-15 05:37:04.931765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.931796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.931983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.932023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.932232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.932264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.932432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.932464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.932654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.932686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 
00:36:51.551 [2024-12-15 05:37:04.932860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.932891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.933034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.933068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.933256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.933288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.933393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.933424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.933655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.933688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 
00:36:51.551 [2024-12-15 05:37:04.933860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.933891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.934153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.934186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.934471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.934503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.934686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.934717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.934966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.935006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 
00:36:51.551 [2024-12-15 05:37:04.935185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.935223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.935413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.935444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.935635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.935667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.935922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.935952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.936209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.936242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 
00:36:51.551 [2024-12-15 05:37:04.936447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.936479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.936777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.936807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.937041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.551 [2024-12-15 05:37:04.937075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.551 qpair failed and we were unable to recover it. 00:36:51.551 [2024-12-15 05:37:04.937193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.937225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.937345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.937376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.937564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.937595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.937851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.937883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.938072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.938104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.938293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.938325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.938538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.938570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.938737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.938768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.938890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.938922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.939171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.939205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.939384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.939415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.939662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.939695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.939844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.939876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.940056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.940090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.940327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.940359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.940477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.940510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.940693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.940725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.940838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.940869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.941073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.941107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.941307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.941339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.941476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.941508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.941683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.941713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.941817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.941849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.941969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.942008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.942190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.942222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.942470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.942502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.942689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.942721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.942908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.942939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.943139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.943172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.943408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.943441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.943622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.943654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.943841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.943872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.944076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.944114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.944287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.944318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.944448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.944480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.944674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.944706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.944895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.944927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.945096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.945130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.945324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.945355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.945528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.945561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.945683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.945715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.945950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.945982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 
00:36:51.552 [2024-12-15 05:37:04.946162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.946194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.946321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.946352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.946472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.946504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.946626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.552 [2024-12-15 05:37:04.946658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.552 qpair failed and we were unable to recover it. 00:36:51.552 [2024-12-15 05:37:04.946886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.553 [2024-12-15 05:37:04.946917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.553 qpair failed and we were unable to recover it. 
00:36:51.553 [2024-12-15 05:37:04.947107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.947142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.947320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.947352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.947540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.947571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.947850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.947882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.948058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.948091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.948348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.948379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.948607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.948639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.948815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.948846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.948971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.949016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.949295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.949327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.949456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.949488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.949675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.949707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.949949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.949982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.950191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.950229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.950473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.950505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.950755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.950788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.950975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.951015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.951199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.951231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.951458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.951489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.951604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.951634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.951834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.951866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.952117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.952151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.952341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.952379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.952575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.952606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.952886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.952919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.953107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.953147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.953342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.953373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.953522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.953554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.953734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.953765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.954024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.954058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.954257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.954294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.954476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.954509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.954688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.954720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.954905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.954936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.955182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.955215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.955454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.955488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.955687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.955718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.955914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.955947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.956146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.956180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.956302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.956335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.956451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.956484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.956808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.956840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.956970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.957011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.957148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.957180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.957299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.957333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.957535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.957575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.957764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.957795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.958080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.958116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.958291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.958325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.958589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.958628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.958805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.958835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.958983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.959023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.959205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.959243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.959430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.959464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.959669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.959706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.959894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.959927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.960051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.960092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.553 qpair failed and we were unable to recover it.
00:36:51.553 [2024-12-15 05:37:04.960357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.553 [2024-12-15 05:37:04.960391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.960533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.960576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.960707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.960738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.960909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.960942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.961147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.961181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.961369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.961402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.961589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.961631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.961898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.961931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.962156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.962198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.962409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.962443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.962692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.962728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.963009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.963044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.963163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.963196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.963325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.963357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.963487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.963524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.963771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.963803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.963985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.964028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.964211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.964242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.964490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.964523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.964781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.964813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.965060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.965094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.965243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.965275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.965535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.965567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.965808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.965840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.966101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.966134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.966311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.966343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.966577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.966609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.966865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.966897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.967084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.967117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.967360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.967392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.967656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.967687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.967870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.967902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.968100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.968134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.968315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.968347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.968463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.968494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.968680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.968713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.969033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.969067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.969184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.969215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.969418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.969449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.969638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.969670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.969842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.969874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.970076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.970110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.970281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.970313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.970575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.970607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.970846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.970878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.971087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.971120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.971333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.971364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.971486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.971518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.971623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.971661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.971775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.971807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.972012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.972045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.972231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.972263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.972503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.972534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.972671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.972703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.973034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.973068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.973255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.973287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.973545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.973577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.973691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.554 [2024-12-15 05:37:04.973723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.554 qpair failed and we were unable to recover it.
00:36:51.554 [2024-12-15 05:37:04.973982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.554 [2024-12-15 05:37:04.974025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.554 qpair failed and we were unable to recover it. 00:36:51.554 [2024-12-15 05:37:04.974153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.554 [2024-12-15 05:37:04.974190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.554 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.974307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.974340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.974527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.974559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.974703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.974734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.974940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.974971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.975163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.975196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.975384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.975416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.975602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.975634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.975839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.975871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.976060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.976093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.976214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.976246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.976435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.976467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.976707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.976739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.976861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.976893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.977099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.977132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.977307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.977339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.977592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.977625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.977821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.977853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.978078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.978110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.978404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.978436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.978567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.978598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.978771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.978802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.979040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.979073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.979193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.979224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.979425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.979457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.979572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.979604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.979734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.979765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.979888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.979920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.980088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.980123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.980333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.980371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.980495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.980526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.980695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.980726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.980961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.981002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.981133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.981165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.981371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.981403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.981662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.981694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.981901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.981935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.982063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.982100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.982279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.982311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.982500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.982534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.982796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.982830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.983092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.983128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.983314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.983348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.983498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.983538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 
00:36:51.555 [2024-12-15 05:37:04.983736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.983771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.983955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.984004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.555 qpair failed and we were unable to recover it. 00:36:51.555 [2024-12-15 05:37:04.984197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.555 [2024-12-15 05:37:04.984232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.984443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.984477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.984585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.984618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 
00:36:51.556 [2024-12-15 05:37:04.984804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.984837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.985016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.985049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.985305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.985338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.985574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.985608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.985800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.985833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 
00:36:51.556 [2024-12-15 05:37:04.986046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.986079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.986286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.986319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.986446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.986478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.986601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.986633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.986753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.986786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 
00:36:51.556 [2024-12-15 05:37:04.986970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.987009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.987198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.987230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.987515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.987548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.987749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.987781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.988039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.988073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 
00:36:51.556 [2024-12-15 05:37:04.988256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.988288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.988404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.988435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.988614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.988647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.988822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.988854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.989028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.989061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 
00:36:51.556 [2024-12-15 05:37:04.989198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.989237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.989377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.989409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.989577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.989609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.989849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.989881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 00:36:51.556 [2024-12-15 05:37:04.990070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.556 [2024-12-15 05:37:04.990107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.556 qpair failed and we were unable to recover it. 
00:36:51.556 [2024-12-15 05:37:04.990282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.556 [2024-12-15 05:37:04.990313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.556 qpair failed and we were unable to recover it.
00:36:51.556 [2024-12-15 05:37:04.994801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.556 [2024-12-15 05:37:04.994878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.556 qpair failed and we were unable to recover it.
00:36:51.558 [2024-12-15 05:37:05.016162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.016204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.016396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.016431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.016617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.016652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.016838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.016871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.017064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.017099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 
00:36:51.558 [2024-12-15 05:37:05.017271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.017304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.017542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.017575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.017744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.017776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.017949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.017990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.018191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.018222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 
00:36:51.558 [2024-12-15 05:37:05.018403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.018434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.018606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.018638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.018828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.018860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.019030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.019063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.019250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.019284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 
00:36:51.558 [2024-12-15 05:37:05.019462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.019495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.019755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.019786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.019918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.019950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.020132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.020165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 00:36:51.558 [2024-12-15 05:37:05.020402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.558 [2024-12-15 05:37:05.020433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.558 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.020611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.020644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.020774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.020807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.021022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.021062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.021244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.021277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.021513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.021545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.021807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.021840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.022117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.022151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.022340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.022373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.022486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.022519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.022770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.022802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.023005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.023038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.023338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.023370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.023560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.023593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.023832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.023863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.024100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.024133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.024315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.024348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.024589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.024620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.024877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.024910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.025106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.025140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.025331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.025362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.025497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.025534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.025774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.025807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.025932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.025964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.026164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.026198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.026436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.026469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.026594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.026627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.026834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.026866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.027035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.027069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.027252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.027284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.027519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.027551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.027787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.027819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.028055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.028088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.028293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.028326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.028528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.028560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.028803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.028835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.029103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.029137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.029311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.029342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.029599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.029631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.029803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.029834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.029951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.029983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.030194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.030228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.030464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.030495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.030754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.030787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.030903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.030935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.031066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.031098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 
00:36:51.559 [2024-12-15 05:37:05.031283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.031315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.031575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.031607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.559 [2024-12-15 05:37:05.031853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.559 [2024-12-15 05:37:05.031886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.559 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.032002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.032036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.032225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.032257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 
00:36:51.560 [2024-12-15 05:37:05.032433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.032465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.032651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.032683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.032863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.032895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.033137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.033171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.033431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.033463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 
00:36:51.560 [2024-12-15 05:37:05.033586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.033617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.033741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.033773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.033877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.033909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.034105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.034137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 00:36:51.560 [2024-12-15 05:37:05.034305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.560 [2024-12-15 05:37:05.034337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.560 qpair failed and we were unable to recover it. 
00:36:51.560 [2024-12-15 05:37:05.039899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.560 [2024-12-15 05:37:05.039932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.560 qpair failed and we were unable to recover it.
00:36:51.560 [2024-12-15 05:37:05.040179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.560 [2024-12-15 05:37:05.040212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.560 qpair failed and we were unable to recover it.
00:36:51.560 [2024-12-15 05:37:05.040441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.560 [2024-12-15 05:37:05.040512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.560 qpair failed and we were unable to recover it.
00:36:51.560 [2024-12-15 05:37:05.040772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.560 [2024-12-15 05:37:05.040808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.560 qpair failed and we were unable to recover it.
00:36:51.560 [2024-12-15 05:37:05.040984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.560 [2024-12-15 05:37:05.041029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.560 qpair failed and we were unable to recover it.
00:36:51.565 [2024-12-15 05:37:05.057355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.057387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.057506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.057538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.057780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.057817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.058002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.058035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.058151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.058183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 
00:36:51.565 [2024-12-15 05:37:05.058283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.058314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.058549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.058581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.058698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.058729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.058857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.058888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.059074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.059107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 
00:36:51.565 [2024-12-15 05:37:05.059304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.059336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.059575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.059606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.059793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.059825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.059951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.059983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 00:36:51.565 [2024-12-15 05:37:05.060125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.060158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.565 qpair failed and we were unable to recover it. 
00:36:51.565 [2024-12-15 05:37:05.060302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.565 [2024-12-15 05:37:05.060335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.060563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.060595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.060830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.060861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.061181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.061214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.061453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.061485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.061722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.061755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.061925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.061956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.062232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.062265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.062385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.062417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.062653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.062685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.062874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.062904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.063106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.063139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.063309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.063340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.063509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.063541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.063730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.063762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.063942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.063973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.064243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.064276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.064399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.064431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.064604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.064635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.064872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.064904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.065052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.065086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.065260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.065292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.065496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.065528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.065734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.065764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.065963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.066002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.066238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.066269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.066398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.066430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.066599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.066635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.066749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.066781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.067042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.067074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.067261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.067292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.067480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.067513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.067775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.067805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.067989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.068112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.068300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.068331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.068458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.068490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.068752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.068784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.068973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.069027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.069221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.069253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.069437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.069468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.069638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.069670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.069863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.069894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.070067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.070101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.070289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.070320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.070457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.070488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.070675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.070707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.070947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.070979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.071123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.071155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.071413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.071444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.071574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.071605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.071817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.071849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.071973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.072013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.072185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.072216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.072412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.072443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 00:36:51.566 [2024-12-15 05:37:05.072649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.566 [2024-12-15 05:37:05.072681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.566 qpair failed and we were unable to recover it. 
00:36:51.566 [2024-12-15 05:37:05.072856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.072887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.073066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.073099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.073219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.073251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.073435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.073467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.073709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.073739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.073979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.074023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.074195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.074227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.074355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.074385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.074550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.074581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.074697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.074729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.074936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.074967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.075112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.075144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.075311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.075349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.075472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.075503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.075737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.075768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.075962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.076002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.076212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.076245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.076432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.076463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.076719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.076751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.076938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.076969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.077170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.077203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.077385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.077417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.077592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.077623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.077753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.077785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.077959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.078001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.078206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.078238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.078512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.078545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.078729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.078761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.078952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.078983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.079227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.079259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.079427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.079458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.079574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.079605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.079784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.079816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.080026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.080060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.080234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.080266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.080393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.080425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.080645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.080676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.080937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.080969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.081171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.081205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.081494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.081526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.081707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.081738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.081913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.081945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.082069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.082102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.082268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.082299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.082469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.082501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.082681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.082712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.082907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.082938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.083060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.083093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.083217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.083249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.083425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.083456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 
00:36:51.567 [2024-12-15 05:37:05.083622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.083653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.567 qpair failed and we were unable to recover it. 00:36:51.567 [2024-12-15 05:37:05.083774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.567 [2024-12-15 05:37:05.083806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.084044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.084083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.084332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.084363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.084533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.084566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 
00:36:51.568 [2024-12-15 05:37:05.084801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.084832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.085026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.085060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.085257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.085289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.085465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.085497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.085733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.085764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 
00:36:51.568 [2024-12-15 05:37:05.085878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.085910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.086170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.086204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.086444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.086475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.086656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.086688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.086930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.086962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 
00:36:51.568 [2024-12-15 05:37:05.087096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.087128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.087306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.087338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.087616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.087646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.087769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.087801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.087983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.088025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 
00:36:51.568 [2024-12-15 05:37:05.088241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.088273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.088401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.088432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.088547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.088579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.088822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.088853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.089048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.089081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 
00:36:51.568 [2024-12-15 05:37:05.089349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.089380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.089595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.089627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.089866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.089897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.090088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.090121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.090313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.090344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 
00:36:51.568 [2024-12-15 05:37:05.090588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.090621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.090815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.090847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.091126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.091158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.091352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.091383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 00:36:51.568 [2024-12-15 05:37:05.091643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.091675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.568 qpair failed and we were unable to recover it. 
00:36:51.568 [2024-12-15 05:37:05.091857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.568 [2024-12-15 05:37:05.091888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.092079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.092113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.092238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.092270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.092455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.092487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.092601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.092632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 
00:36:51.569 [2024-12-15 05:37:05.092897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.092929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.093207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.093241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.093344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.093381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.093562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.093593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.093697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.093729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 
00:36:51.569 [2024-12-15 05:37:05.093914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.093945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.094071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.094104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.094222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.094254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.094441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.094472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.094647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.094679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 
00:36:51.569 [2024-12-15 05:37:05.094886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.094917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.095103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.095136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.095328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.095359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.095462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.095493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.095784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.095816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 
00:36:51.569 [2024-12-15 05:37:05.096015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.096048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.096172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.096203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.096334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.096365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.569 qpair failed and we were unable to recover it. 00:36:51.569 [2024-12-15 05:37:05.096490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.569 [2024-12-15 05:37:05.096521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.570 qpair failed and we were unable to recover it. 00:36:51.570 [2024-12-15 05:37:05.096757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.570 [2024-12-15 05:37:05.096788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.570 qpair failed and we were unable to recover it. 
00:36:51.572 [2024-12-15 05:37:05.116446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.572 [2024-12-15 05:37:05.116519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.572 qpair failed and we were unable to recover it. 
00:36:51.573 [2024-12-15 05:37:05.120933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.120964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.121180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.121214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.121385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.121417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.121653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.121684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.121868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.121898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 
00:36:51.573 [2024-12-15 05:37:05.122068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.122101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.122279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.122314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.122497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.122529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.122813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.122844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.122972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.123013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 
00:36:51.573 [2024-12-15 05:37:05.123136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.123167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.123296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.123327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.123444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.123475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.123674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.123706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.123884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.123914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 
00:36:51.573 [2024-12-15 05:37:05.124018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.124051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.124159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.124190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.124395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.124426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.124604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.124636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.124823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.124855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 
00:36:51.573 [2024-12-15 05:37:05.125035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.125067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.125192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.125223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.125409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.125440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.125614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.125645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.125907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.125938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 
00:36:51.573 [2024-12-15 05:37:05.126212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.126246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.126429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.126460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.126667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.126699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.126877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.126908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.127091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.127124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 
00:36:51.573 [2024-12-15 05:37:05.127333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.127364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.127466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.127497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.127605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.573 [2024-12-15 05:37:05.127636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.573 qpair failed and we were unable to recover it. 00:36:51.573 [2024-12-15 05:37:05.127821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.127852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.127970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.128011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 
00:36:51.574 [2024-12-15 05:37:05.128191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.128222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.128407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.128438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.128687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.128718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.128898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.128929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.129133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.129167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 
00:36:51.574 [2024-12-15 05:37:05.129356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.129388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.129650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.129681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.129929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.129961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.130174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.130207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.130314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.130345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 
00:36:51.574 [2024-12-15 05:37:05.130460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.130492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.130659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.130695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.130886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.130918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.131046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.131079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.131339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.131370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 
00:36:51.574 [2024-12-15 05:37:05.131630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.131662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.131793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.131823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.132110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.132143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.132400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.132431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.132544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.132575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 
00:36:51.574 [2024-12-15 05:37:05.132699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.132730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.132903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.132935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.574 qpair failed and we were unable to recover it. 00:36:51.574 [2024-12-15 05:37:05.133116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.574 [2024-12-15 05:37:05.133149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.133334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.133365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.133530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.133561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 
00:36:51.575 [2024-12-15 05:37:05.133735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.133767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.133889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.133920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.134039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.134073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.134198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.134228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.134422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.134454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 
00:36:51.575 [2024-12-15 05:37:05.134562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.134594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.134789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.134820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.134999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.135033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.135205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.135237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.135432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.135463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 
00:36:51.575 [2024-12-15 05:37:05.135576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.135608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.135869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.135901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.136106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.136139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.136337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.136368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 00:36:51.575 [2024-12-15 05:37:05.136626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.136658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 
00:36:51.575 [2024-12-15 05:37:05.136781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.575 [2024-12-15 05:37:05.136813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.575 qpair failed and we were unable to recover it. 
00:36:51.579 [... identical connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence repeats continuously for tqpair=0x7fa228000b90 (addr=10.0.0.2, port=4420) from 05:37:05.136 through 05:37:05.161 ...]
00:36:51.579 [2024-12-15 05:37:05.161776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.161807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.161930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.161962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.162176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.162209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.162478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.162510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.162613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.162645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 
00:36:51.580 [2024-12-15 05:37:05.162760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.162791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.163028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.163060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.163184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.163215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.163329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.163361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.163538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.163569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 
00:36:51.580 [2024-12-15 05:37:05.163744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.163776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.163967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.164004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.164190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.164221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.164419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.164450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.164561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.164592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 
00:36:51.580 [2024-12-15 05:37:05.164831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.164861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.165035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.165074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.165200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.165232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.165498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.165529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.165716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.165747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 
00:36:51.580 [2024-12-15 05:37:05.165931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.165963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.166207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.166239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.166447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.166478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.166666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.166697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.166823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.166853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 
00:36:51.580 [2024-12-15 05:37:05.166969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.167008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.167208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.167240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.167423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.167453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.167688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.167719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.167901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.167933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 
00:36:51.580 [2024-12-15 05:37:05.168113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.168146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.168335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.168367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.168542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.168574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.168685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.168715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 00:36:51.580 [2024-12-15 05:37:05.168854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.580 [2024-12-15 05:37:05.168885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.580 qpair failed and we were unable to recover it. 
00:36:51.581 [2024-12-15 05:37:05.169075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.169108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.169301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.169332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.169444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.169475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.169645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.169676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.169795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.169826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 
00:36:51.581 [2024-12-15 05:37:05.169948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.169979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.170165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.170196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.170379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.170410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.170541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.170573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.170762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.170793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 
00:36:51.581 [2024-12-15 05:37:05.170964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.171002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.171270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.171307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.171437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.171468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.171658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.171689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.171818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.171850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 
00:36:51.581 [2024-12-15 05:37:05.172047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.172080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.172253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.172284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.172456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.172488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.172749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.172779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.172958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.172988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 
00:36:51.581 [2024-12-15 05:37:05.173143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.173175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.173415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.173453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.173668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.173699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.173867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.173898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.174027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.174060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 
00:36:51.581 [2024-12-15 05:37:05.174248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.174278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.174465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.174496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.174666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.174697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.174957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.174988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.175185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.175217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 
00:36:51.581 [2024-12-15 05:37:05.175417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.581 [2024-12-15 05:37:05.175448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.581 qpair failed and we were unable to recover it. 00:36:51.581 [2024-12-15 05:37:05.175627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.175658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.175914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.175946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.176076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.176109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.176244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.176275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 
00:36:51.582 [2024-12-15 05:37:05.176540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.176572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.176749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.176782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.177034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.177068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.177272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.177305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.177501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.177532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 
00:36:51.582 [2024-12-15 05:37:05.177702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.177732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.177920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.177951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.178196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.178229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.178410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.178440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.178624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.178655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 
00:36:51.582 [2024-12-15 05:37:05.178846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.178878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.179153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.179185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.179433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.179463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.179655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.179687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.179813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.179843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 
00:36:51.582 [2024-12-15 05:37:05.180016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.180049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.180232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.180263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.180451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.180483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.180586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.180616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.180714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.180745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 
00:36:51.582 [2024-12-15 05:37:05.180932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.180963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.181213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.181284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.181547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.181583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.181830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.181863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.182120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.182153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 
00:36:51.582 [2024-12-15 05:37:05.182338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.182370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.182553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.182593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.182777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.182808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.183069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.183103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.183290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.183322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 
00:36:51.582 [2024-12-15 05:37:05.183518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.183550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.183673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.183704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.183877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.582 [2024-12-15 05:37:05.183908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.582 qpair failed and we were unable to recover it. 00:36:51.582 [2024-12-15 05:37:05.184103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.184136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.184322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.184355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 
00:36:51.583 [2024-12-15 05:37:05.184476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.184507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.184626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.184658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.184838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.184870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.185058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.185091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.185214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.185245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 
00:36:51.583 [2024-12-15 05:37:05.185372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.185403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.185591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.185623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.185746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.185777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.186002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.186034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.186270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.186302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 
00:36:51.583 [2024-12-15 05:37:05.186426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.186458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.186646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.186678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.186880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.186912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.187116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.187150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.187272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.187302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 
00:36:51.583 [2024-12-15 05:37:05.187487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.187519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.187721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.187753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.187957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.187988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.188219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.188292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.188498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.188532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 
00:36:51.583 [2024-12-15 05:37:05.188739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.188770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.189025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.189058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.189297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.189329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.189536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.189567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.189666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.189697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 
00:36:51.583 [2024-12-15 05:37:05.189957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.189988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.190256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.583 [2024-12-15 05:37:05.190288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.583 qpair failed and we were unable to recover it. 00:36:51.583 [2024-12-15 05:37:05.190466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.190497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.190672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.190704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.190944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.190975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 
00:36:51.584 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 542351 Killed "${NVMF_APP[@]}" "$@" 00:36:51.584 [2024-12-15 05:37:05.191105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.191139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.191390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.191421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.191682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.191715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:36:51.584 [2024-12-15 05:37:05.191977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.192018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 
00:36:51.584 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:36:51.584 [2024-12-15 05:37:05.192139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.192171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:36:51.584 [2024-12-15 05:37:05.192448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.192481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.192692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.192726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:36:51.584 [2024-12-15 05:37:05.192897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.192929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 
00:36:51.584 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.584 [2024-12-15 05:37:05.193167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.193201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.193464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.193497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.193760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.193791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.193980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.194022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.194351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.194420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 
00:36:51.584 [2024-12-15 05:37:05.194688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.194757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.194907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.194943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.195164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.195199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.195410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.195442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.195635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.195666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 
00:36:51.584 [2024-12-15 05:37:05.195909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.195941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.196235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.196268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.196504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.196535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.196776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.196808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.584 qpair failed and we were unable to recover it. 00:36:51.584 [2024-12-15 05:37:05.196936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.584 [2024-12-15 05:37:05.196967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 
00:36:51.585 [2024-12-15 05:37:05.197232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.197263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.197507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.197540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.197744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.197785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.197922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.197956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.198094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.198128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 
00:36:51.585 [2024-12-15 05:37:05.198304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.198336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.198525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.198557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.198689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.198721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.198902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.198934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.199144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.199179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 
00:36:51.585 [2024-12-15 05:37:05.199377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.199408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.199586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.199618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@509 -- # nvmfpid=543067 00:36:51.585 [2024-12-15 05:37:05.199860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.199894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.200068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.200102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 
00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@510 -- # waitforlisten 543067 00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:36:51.585 [2024-12-15 05:37:05.200360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.200402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 543067 ']' 00:36:51.585 [2024-12-15 05:37:05.200621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.200656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.200792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.200823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 
00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:36:51.585 [2024-12-15 05:37:05.200947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.200979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:36:51.585 [2024-12-15 05:37:05.201276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.201311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 [2024-12-15 05:37:05.201432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.201463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:36:51.585 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:36:51.585 [2024-12-15 05:37:05.201648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.201682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.585 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:36:51.585 [2024-12-15 05:37:05.201922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.585 [2024-12-15 05:37:05.201957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.585 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.202093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.586 [2024-12-15 05:37:05.202128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.202263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.202296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 
00:36:51.586 [2024-12-15 05:37:05.202467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.202507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.202725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.202757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.203002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.203035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.203217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.203249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.203454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.203488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 
00:36:51.586 [2024-12-15 05:37:05.203755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.203787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.203979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.204020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.204156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.204189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.204499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.204531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.204670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.204702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 
00:36:51.586 [2024-12-15 05:37:05.204930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.204962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.205192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.205263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.205521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.586 [2024-12-15 05:37:05.205557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.586 qpair failed and we were unable to recover it. 00:36:51.586 [2024-12-15 05:37:05.205803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.205837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.206005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.206042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 
00:36:51.876 [2024-12-15 05:37:05.206311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.206347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.206532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.206565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.206670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.206710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.206914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.206950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.207225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.207261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 
00:36:51.876 [2024-12-15 05:37:05.207470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.207503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.207630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.207664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.207790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.207823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.208015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.208053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.208319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.208352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 
00:36:51.876 [2024-12-15 05:37:05.208469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.208501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.208709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.208741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.208922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.208956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.209242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.209276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.209466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.209499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 
00:36:51.876 [2024-12-15 05:37:05.209635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.209667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.209858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.209890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.210148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.210183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.210318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.210350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.210467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.210499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 
00:36:51.876 [2024-12-15 05:37:05.210704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.210737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.210910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.210943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.211126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.211160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.211369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.211403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.211527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.211559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 
00:36:51.876 [2024-12-15 05:37:05.211773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.211820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.876 qpair failed and we were unable to recover it. 00:36:51.876 [2024-12-15 05:37:05.211958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.876 [2024-12-15 05:37:05.211989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.212184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.212217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.212400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.212434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.212737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.212770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.212889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.212923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.213045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.213079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.213207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.213240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.213426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.213460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.213696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.213729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.214006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.214039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.214174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.214207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.214334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.214369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.214613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.214646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.214784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.214817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.215054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.215088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.215269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.215301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.215475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.215507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.215686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.215718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.215826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.215858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.216031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.216064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.216181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.216214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.216339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.216370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.216625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.216656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.216849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.216884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.217068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.217101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.217283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.217315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.217580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.217613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.217793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.217825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.218111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.218144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.218335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.218367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.218603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.218635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.218756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.218789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.219068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.219104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.219224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.219256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.219436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.219469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.219594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.219626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.219888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.219920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.220159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.220193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.220440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.220472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.220588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.220626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.220742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.220775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.220945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.220976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.221184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.221217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 00:36:51.877 [2024-12-15 05:37:05.221401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.877 [2024-12-15 05:37:05.221434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.877 qpair failed and we were unable to recover it. 
00:36:51.877 [2024-12-15 05:37:05.221558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.221590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.221717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.221749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.221924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.221956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.222090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.222125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.222237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.222269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.222453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.222486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.222656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.222690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.222880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.222913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.223125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.223159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.223340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.223373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.223558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.877 [2024-12-15 05:37:05.223591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.877 qpair failed and we were unable to recover it.
00:36:51.877 [2024-12-15 05:37:05.223836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.223869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.223979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.224026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.224150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.224183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.224354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.224386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.224559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.224590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.224772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.224804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.224989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.225030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.225145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.225179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.225355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.225387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.225556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.225588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.225708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.225741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.225966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.226054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.226253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.226291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.226477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.226513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.226691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.226724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.226919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.226951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.227163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.227198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.227335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.227366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.227496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.227528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.227655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.227688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.227820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.227852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.228071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.228108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.228308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.228341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.228464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.228497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.228669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.228719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.228982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.229023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.229194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.229227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.229411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.229443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.229635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.229670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.229781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.229813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.230027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.230061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.230254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.230287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.230410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.230443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.230725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.230757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.230866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.230899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.231014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.231053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.231244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.231278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.231452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.231484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.231661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.231694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.231928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.231960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.232290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.232361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.232495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.232537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.232779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.232812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.233005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.233038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.233181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.233213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.233394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.233426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.233639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.233673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.233881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.233914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.234017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.234050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.234238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.234271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.234407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.234439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.234727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.234798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.235020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.235059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.235248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.235280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.235397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.235429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.235615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.235648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.235835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.235869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.236055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.236089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.236287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.236319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.878 [2024-12-15 05:37:05.236426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.878 [2024-12-15 05:37:05.236458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.878 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.236644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.236681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.236797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.236829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.237036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.237069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.237239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.237272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.237446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.237486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.237740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.237773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.238029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.238063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.238177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.238207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.238435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.238469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.238668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.238700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.238886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.238919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.239053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.239087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.239281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.239315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.239500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.239532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.239717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.239749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.239870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.239903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.240165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.240199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.240331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.240363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.240605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.240637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.240829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.240859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.241102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.241138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.241401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.241434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.241615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.241647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.241827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.879 [2024-12-15 05:37:05.241860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.879 qpair failed and we were unable to recover it.
00:36:51.879 [2024-12-15 05:37:05.242111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.242146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.242412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.242444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.242648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.242682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.242809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.242842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.243020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.243054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 
00:36:51.879 [2024-12-15 05:37:05.243169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.243201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.243391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.243423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.243622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.243665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.243853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.243885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.244097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.244133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 
00:36:51.879 [2024-12-15 05:37:05.244258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.244290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.244410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.244441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.244558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.244589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.244796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.244827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.245007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.245040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 
00:36:51.879 [2024-12-15 05:37:05.245295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.245327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.245431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.245462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.245638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.245670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.245784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.245814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.245942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.245973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 
00:36:51.879 [2024-12-15 05:37:05.246102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.246135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.246361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.246394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.246523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.246554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.246720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.246750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.246874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.246906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 
00:36:51.879 [2024-12-15 05:37:05.247147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.247181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.247297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.247328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.247617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.247649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.247838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.247869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 00:36:51.879 [2024-12-15 05:37:05.248057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.879 [2024-12-15 05:37:05.248089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.879 qpair failed and we were unable to recover it. 
00:36:51.879 [2024-12-15 05:37:05.248258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.880 [2024-12-15 05:37:05.248289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.880 qpair failed and we were unable to recover it. 00:36:51.880 [2024-12-15 05:37:05.248390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.880 [2024-12-15 05:37:05.248422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.880 qpair failed and we were unable to recover it. 00:36:51.880 [2024-12-15 05:37:05.248605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.880 [2024-12-15 05:37:05.248636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.880 qpair failed and we were unable to recover it. 00:36:51.880 [2024-12-15 05:37:05.248816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.880 [2024-12-15 05:37:05.248847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.880 qpair failed and we were unable to recover it. 00:36:51.880 [2024-12-15 05:37:05.249023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.880 [2024-12-15 05:37:05.249062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.880 qpair failed and we were unable to recover it. 
00:36:51.880 [2024-12-15 05:37:05.250274] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:36:51.880 [2024-12-15 05:37:05.250326] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:36:51.881 [2024-12-15 05:37:05.263544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.263576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.263763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.263795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.264081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.264113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.264214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.264247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.264434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.264467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.264705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.264737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.264918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.264950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.265213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.265247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.265438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.265470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.265593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.265630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.265766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.265799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.265989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.266030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.266144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.266177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.266456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.266489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.266671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.266704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.266891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.266924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.267053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.267088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.267331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.267366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.267551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.267583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.267697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.267748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.267925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.267959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.268182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.268233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.268419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.268454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.268571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.268610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.268879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.268912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.269111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.269145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.269282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.269315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.269497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.269530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.269809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.269841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.270119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.270158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.270344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.270376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.270571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.270604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.270804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.270837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.271134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.271171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.271379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.271411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.271594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.271626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.271813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.271854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.272066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.272102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.272375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.272408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.272643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.272676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.272868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.272906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.273032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.273065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.273262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.273296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.273480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.273513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.273722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.273762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 
00:36:51.881 [2024-12-15 05:37:05.273938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.273971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.274110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.274144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.881 [2024-12-15 05:37:05.274360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.881 [2024-12-15 05:37:05.274393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.881 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.274570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.274603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.274884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.274918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.275181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.275215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.275407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.275441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.275566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.275599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.275702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.275737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.275948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.275985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.276208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.276244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.276387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.276421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.276525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.276557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.276820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.276854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.276975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.277014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.277202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.277235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.277445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.277479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.277591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.277622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.277865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.277902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.278104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.278144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.278374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.278406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.278642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.278675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.278862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.278896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.279101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.279134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.279335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.279368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.279538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.279575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.279769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.279804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.279925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.279957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.280076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.280110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.280374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.280406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.280588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.280622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.280805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.280845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.281019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.281052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.281243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.281276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.281413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.281447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.281663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.281696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.281877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.281909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.282176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.282210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.282398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.282432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.282560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.282592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.282830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.282863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.283112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.283145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.283324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.283361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.283627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.283660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.283918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.283951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.284219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.284254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.284529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.284561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.284742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.284775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.284947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.284978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.285196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.285230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.285425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.285459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.285580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.285611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.285850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.285883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.286052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.286085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.286270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.286303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 
00:36:51.882 [2024-12-15 05:37:05.286567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.286599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.286730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.286761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.286889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.286921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.287204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.882 [2024-12-15 05:37:05.287241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.882 qpair failed and we were unable to recover it. 00:36:51.882 [2024-12-15 05:37:05.287350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.287382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.287566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.287598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.287768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.287801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.287921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.287953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.288173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.288210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.288353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.288385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.288490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.288521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.288786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.288818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.289016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.289050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.289239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.289270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.289441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.289473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.289664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.289696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.289859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.289891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.290076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.290109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.290226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.290258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.290369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.290402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.290599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.290631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.290891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.290924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.291194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.291228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.291362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.291392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.291654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.291688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.291801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.291832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.292030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.292064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.292247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.292279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.292542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.292574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.292707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.292739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.292866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.292903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.293010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.293043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.293260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.293293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.293571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.293602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.293860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.293892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.294028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.294061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.294256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.294288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.294475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.294507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.294690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.294727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.294988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.295048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.295233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.295264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.295395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.295427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.295690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.295723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.295891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.295923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.296058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.296091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.296202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.296234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.296479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.296512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.296686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.296719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.296889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.296921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.297103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.297136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.297409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.297442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.297561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.297594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.297774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.297806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.297930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.297962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.298183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.298220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.298462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.298494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.298593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.298626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.298894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.298928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.299045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.299077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.299194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.299226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.299502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.299535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.299722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.299754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.300010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.300044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.300272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.300307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.883 [2024-12-15 05:37:05.300436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.300469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 
00:36:51.883 [2024-12-15 05:37:05.300591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.883 [2024-12-15 05:37:05.300622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.883 qpair failed and we were unable to recover it. 00:36:51.884 [2024-12-15 05:37:05.300736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.884 [2024-12-15 05:37:05.300767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.884 qpair failed and we were unable to recover it. 00:36:51.884 [2024-12-15 05:37:05.300934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.884 [2024-12-15 05:37:05.300967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.884 qpair failed and we were unable to recover it. 00:36:51.884 [2024-12-15 05:37:05.301093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.884 [2024-12-15 05:37:05.301126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.884 qpair failed and we were unable to recover it. 00:36:51.884 [2024-12-15 05:37:05.301258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.884 [2024-12-15 05:37:05.301290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.884 qpair failed and we were unable to recover it. 
00:36:51.884 [2024-12-15 05:37:05.301549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.301587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.301844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.301877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.302135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.302169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.302286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.302316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.302437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.302469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.302729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.302761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.302947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.302980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.303167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.303199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.303386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.303417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.303591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.303622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.303802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.303832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.304106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.304140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.304263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.304294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.304413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.304444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.304650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.304682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.304951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.304982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.305185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.305218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.305406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.305438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.305554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.305585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.305787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.305820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.306025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.306058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.306238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.306269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.306553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.306595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.306776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.306809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.307012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.307045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.307168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.307199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.307417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.307448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.307630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.307662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.307862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.307895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.308157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.308190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.308374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.308406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.308549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.308581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.308767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.308798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.308925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.308956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.309154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.309187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.309377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.309410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.309613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.309646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.309827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.309858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.310033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.310066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.310243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.310276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.310479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.310511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.310696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.310739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.310931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.310964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.311176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.311223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.311401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.311434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.311557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.311595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.311811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.311854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.312133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.312169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.312303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.312335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.312512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.312550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.312806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.312840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.312963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.313005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.313187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.313220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.313480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.313513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.313650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.313693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.313933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.313968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.314163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.314196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.314325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.884 [2024-12-15 05:37:05.314358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.884 qpair failed and we were unable to recover it.
00:36:51.884 [2024-12-15 05:37:05.314538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.314574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.314766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.314806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.314991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.315033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.315207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.315240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.315428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.315472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.315659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.315691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.315860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.315892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.316060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.316094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.316339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.316373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.316570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.316602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.316745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.316777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.316951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.316983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.317109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.317141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.317325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.317356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.317540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.317572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.317853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.317886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.318050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.318084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.318344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.318377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.318569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.318600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.318801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.318833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.319014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.319048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.319216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.319248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.319506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.319537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.319687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.319735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.319946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.319982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.320168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.320200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.320330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.320361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.320534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.320564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.320771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.320801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.321013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.321045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.321214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.321245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.321454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.321484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.321721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.321752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.321880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.321910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.322032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.322064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.322271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.322302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.322483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.322513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.322680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.322710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.322830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.322860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.323095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.323127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.323372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.323403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.323571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.323601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.323772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.323802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.323979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.324019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.324142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.324174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.324386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.324421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.324607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.324637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.324888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.324920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.325102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.325135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.325327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.325363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.325571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.325604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.325739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.325770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.325983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.326026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.326219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.326263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.326467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.326502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.326684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.326716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.885 qpair failed and we were unable to recover it.
00:36:51.885 [2024-12-15 05:37:05.326847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.885 [2024-12-15 05:37:05.326878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.886 qpair failed and we were unable to recover it.
00:36:51.886 [2024-12-15 05:37:05.327069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.327103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.327378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.327413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.327657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.327689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.327802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.327834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.328081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.328118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.328246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.328281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.328473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.328522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.328789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.328821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.328938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.328970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.329226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.329259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.329471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.329504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.329767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.329799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.329930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.329962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.330074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.330107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.330246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.330278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.330540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.330571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.330777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.330808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.331013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.331046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.331300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.331332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.331564] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:36:51.886 [2024-12-15 05:37:05.331568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.331604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.331815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.331846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.332027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.332059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.332264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.332295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.332519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.332550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.332824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.332855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.333053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.333086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.333285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.333317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.333499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.333531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.333711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.333743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.333922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.333955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.334108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.334142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.334270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.334302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.334511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.334543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.334820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.334852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.335037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.335070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.335256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.335289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.335532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.335563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.335668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.335700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.335833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.335865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.336064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.336097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.336268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.336300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.336413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.336445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.336629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.336661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.336832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.336864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.337046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.337080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.337281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.337313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.337507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.337540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.337657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.337690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.337874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.337907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.338083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.338116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.338322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.338354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.338525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.338557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.338681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.338715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.338901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.338933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.339123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.339157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.339293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.339326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.339501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.339532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.339800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.339833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.339961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.340001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 00:36:51.886 [2024-12-15 05:37:05.340173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.886 [2024-12-15 05:37:05.340214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.886 qpair failed and we were unable to recover it. 
00:36:51.886 [2024-12-15 05:37:05.340492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.340524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.340704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.340736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.340927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.340960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.341150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.341185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.341320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.341354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 
00:36:51.887 [2024-12-15 05:37:05.341619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.341654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.341845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.341877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.342060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.342094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.342341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.342376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.342670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.342702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 
00:36:51.887 [2024-12-15 05:37:05.342885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.342917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.343129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.343162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.343286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.343317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.343508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.343539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 00:36:51.887 [2024-12-15 05:37:05.343736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.887 [2024-12-15 05:37:05.343768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.887 qpair failed and we were unable to recover it. 
00:36:51.887 [2024-12-15 05:37:05.343962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.344001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.344210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.344241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.344423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.344455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.344717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.344749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.344982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.345024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.345233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.345264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.345560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.345592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.345801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.345832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.346093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.346126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.346378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.346410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.346548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.346579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.346786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.346817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.347099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.347132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.347247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.347279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.347451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.347483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.347673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.347704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.347834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.348034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.348095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.348295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.348325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.348495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.348526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.348783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.348815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.348998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.349031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.349304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.349336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.349599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.349630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.349817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.349855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.349988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.350031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.350225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.350256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.350514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.350545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.350789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.350821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.351049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.351084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.351193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.351227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.351492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.351527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.351779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.351812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.352051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.352085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.352287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.352318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.352501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.352535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.352793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.352827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.353069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.353104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.353307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.353340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.353523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.353556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.353739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.353769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.354029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.354062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.887 [2024-12-15 05:37:05.354062] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:36:51.887 qpair failed and we were unable to recover it.
00:36:51.887 [2024-12-15 05:37:05.354096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:36:51.887 [2024-12-15 05:37:05.354103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:36:51.887 [2024-12-15 05:37:05.354110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:36:51.887 [2024-12-15 05:37:05.354115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:36:51.887 [2024-12-15 05:37:05.354247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.887 [2024-12-15 05:37:05.354279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.354470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.354501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.354759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.354791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.355004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.355037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.355158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.355190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.355374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.355406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.355537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.355569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.355624] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5
00:36:51.888 [2024-12-15 05:37:05.355751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.355784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.355731] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6
00:36:51.888 [2024-12-15 05:37:05.355837] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:36:51.888 [2024-12-15 05:37:05.355839] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7
00:36:51.888 [2024-12-15 05:37:05.355954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.355985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.356105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.356135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.356342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.356374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.356570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.356601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.356885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.356917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.357158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.357192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.357404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.357437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.357640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.357672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.357933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.357965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.358167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.358199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.358446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.358477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.358619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.358668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.358901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.358944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.359199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.359232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.359515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.359548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.359835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.359868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.360013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.360047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.360167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.360200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.360384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.360417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.360555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.360588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.360856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.360889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.361151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.361186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.361402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.361434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.361568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.361601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.361805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.361845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.362095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.362128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.362252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.362284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.362472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.362504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.362675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.362707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.362820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.362853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.362962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.363001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.363137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.363170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.363354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.363386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.363509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.363542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.363715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.363748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.363934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.363966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.364106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.364139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.364309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.364342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.364618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.364651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.364887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.364920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.365045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.365079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.365197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.365230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.365404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.365436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.365551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.365583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.365847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.365880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.366127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.366161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.366357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.366390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.366686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.366719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.366843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.366875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.367046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.367080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.888 qpair failed and we were unable to recover it.
00:36:51.888 [2024-12-15 05:37:05.367261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.888 [2024-12-15 05:37:05.367295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.367491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.367532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.367716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.367749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.367959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.368002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.368127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.368159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.368357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.368390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.368499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.368531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.368716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.368749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.368938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.368971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.369107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.369140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.369328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.369363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.369476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.889 [2024-12-15 05:37:05.369509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420
00:36:51.889 qpair failed and we were unable to recover it.
00:36:51.889 [2024-12-15 05:37:05.369701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.369736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.369905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.369938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.370135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.370169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.370362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.370395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.370689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.370723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.370860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.370893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.371084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.371120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.371370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.371403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.371591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.371627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.371807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.371840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.372043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.372076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.372248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.372281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.372413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.372447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.372555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.372588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.372858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.372893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.373155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.373190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.373324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.373362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.373491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.373523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.373718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.373753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.374041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.374077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.374261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.374295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.374477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.374510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.374787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.374822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.375007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.375043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.375296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.375330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.375604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.375638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.375821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.375856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.376046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.376081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.376288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.376322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.376561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.376595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.376832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.376866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.377068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.377103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.377306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.377339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.377454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.377486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.377680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.377713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.377889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.377923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.378036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.378068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.378313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.378346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.378532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.378564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.378761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.378794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.378925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.378958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.379165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.379200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.379388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.379420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.379606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.379644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.379769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.379801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.380042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.380077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.380355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.380389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.380576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.380610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.380854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.380887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 00:36:51.889 [2024-12-15 05:37:05.381021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.889 [2024-12-15 05:37:05.381056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.889 qpair failed and we were unable to recover it. 
00:36:51.889 [2024-12-15 05:37:05.381315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.381350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.381600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.381633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.381841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.381874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.382113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.382148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.382331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.382363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 
00:36:51.890 [2024-12-15 05:37:05.382493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.382526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.382641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.382673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.382798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.382832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.383097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.383132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.383317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.383348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 
00:36:51.890 [2024-12-15 05:37:05.383488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.383520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.383701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.383734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.383915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.383947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.384175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.384211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.384411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.384445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 
00:36:51.890 [2024-12-15 05:37:05.384572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.384604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.384854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.384887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.385021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.385054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.385292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.385323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.385592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.385625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 
00:36:51.890 [2024-12-15 05:37:05.385823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.385855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.386042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.386075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.386281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.386311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.386483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.386515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 00:36:51.890 [2024-12-15 05:37:05.386643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.386676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 
00:36:51.890 [2024-12-15 05:37:05.386847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.890 [2024-12-15 05:37:05.386879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:51.890 qpair failed and we were unable to recover it. 
00:36:51.892 [preceding error triplet repeated 115 times between 05:37:05.386 and 05:37:05.412: connect() failed with errno = 111 (ECONNREFUSED) to addr=10.0.0.2, port=4420 for tqpair=0x213c6a0, 0x7fa228000b90, 0x7fa22c000b90, and 0x7fa234000b90; each attempt ended with "qpair failed and we were unable to recover it."]
00:36:51.892 [2024-12-15 05:37:05.412708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.412741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.412933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.412966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.413092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.413125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.413339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.413371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.413563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.413595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 
00:36:51.892 [2024-12-15 05:37:05.413876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.413909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.414133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.414166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.414455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.414486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.414675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.414707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.414827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.414859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 
00:36:51.892 [2024-12-15 05:37:05.414979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.415020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.415285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.415317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.415506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.415538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.415731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.415763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.415875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.415914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 
00:36:51.892 [2024-12-15 05:37:05.416097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.416131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.416304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.416336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.416450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.416483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.416689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.416722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.416971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.417013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 
00:36:51.892 [2024-12-15 05:37:05.417130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.417164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.417333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.417365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.417480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.417512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.417752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.417783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.417885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.417917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 
00:36:51.892 [2024-12-15 05:37:05.418093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.418127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.418251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.418283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.418451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.418483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.418697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.418729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.418906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.418938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 
00:36:51.892 [2024-12-15 05:37:05.419076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.419109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.419308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.419340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.419514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.419546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.419809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.419840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.419945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.419977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 
00:36:51.892 [2024-12-15 05:37:05.420119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.420152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.420274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.892 [2024-12-15 05:37:05.420306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.892 qpair failed and we were unable to recover it. 00:36:51.892 [2024-12-15 05:37:05.420426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.420459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.420694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.420725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.420911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.420943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.421079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.421114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.421293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.421325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.421446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.421478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.421591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.421623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.421729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.421761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.421954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.421985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.422107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.422139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.422354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.422386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.422599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.422631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.422747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.422778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.422953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.422986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.423193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.423226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.423418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.423449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.423585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.423617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.423792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.423830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.424016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.424049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.424154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.424187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.424304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.424337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.424508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.424539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.424776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.424808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.425043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.425077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.425355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.425387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.425625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.425657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.425788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.425819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.426010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.426043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.426225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.426257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.426423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.426456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.426665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.426697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.426949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.426980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.427169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.427202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.427387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.427418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.427611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.427643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.427847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.427878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.428065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.428100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.428272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.428304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.893 [2024-12-15 05:37:05.428482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.428514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.428703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.428735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.428836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.428869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.428983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.429032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 00:36:51.893 [2024-12-15 05:37:05.429141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.893 [2024-12-15 05:37:05.429173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.893 qpair failed and we were unable to recover it. 
00:36:51.894 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:36:51.894 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:36:51.894 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:36:51.895 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:51.895 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:51.895 [2024-12-15 05:37:05.451836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.451868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.451971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.452011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.452320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.452357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.452550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.452584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.452725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.452757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 
00:36:51.895 [2024-12-15 05:37:05.453018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.453052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.453227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.453259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.453362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.453394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.453577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.453608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.453776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.453808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 
00:36:51.895 [2024-12-15 05:37:05.453918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.453949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.454069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.454103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.454223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.454255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.454360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.454392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.454568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.454603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 
00:36:51.895 [2024-12-15 05:37:05.454785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.454820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.455028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.455061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.455264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.455297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.455425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.455458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.455591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.455623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 
00:36:51.895 [2024-12-15 05:37:05.455736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.455769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.455899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.455932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.456040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.895 [2024-12-15 05:37:05.456076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.895 qpair failed and we were unable to recover it. 00:36:51.895 [2024-12-15 05:37:05.456181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.456217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.456423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.456465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.456650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.456683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.456870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.456905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.457034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.457070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.457254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.457287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.457481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.457514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.457617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.457650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.457826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.457860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.457967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.458008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.458265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.458301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.458441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.458473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.458649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.458683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.458868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.458900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.459032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.459066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.459360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.459393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.459593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.459625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.459748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.459781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.459902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.459933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.460106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.460140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.460340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.460378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.460494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.460526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.460644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.460678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.460785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.460817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.460945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.460978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.461125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.461158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.461290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.461333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.461462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.461494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.461617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.461650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.461752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.461784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.462006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.462042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.462244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.462281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.462471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.462507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.462612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.462644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.463634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.463691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.463895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.463933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.464123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.464167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.464275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.464310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.464437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.464470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.464660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.464692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.464818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.464850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.464969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.465047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.465254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.465289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.465412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.465444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.465557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.465588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.465711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.465746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.465871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.465905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.466023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.466069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.466195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.466230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.466446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.466479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.466600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.466634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 00:36:51.896 [2024-12-15 05:37:05.466827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.896 [2024-12-15 05:37:05.466860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.896 qpair failed and we were unable to recover it. 
00:36:51.896 [2024-12-15 05:37:05.467007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.896 [2024-12-15 05:37:05.467044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.896 qpair failed and we were unable to recover it.
00:36:51.896 [2024-12-15 05:37:05.467168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.896 [2024-12-15 05:37:05.467202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.896 qpair failed and we were unable to recover it.
00:36:51.896 [2024-12-15 05:37:05.467326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.896 [2024-12-15 05:37:05.467358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.896 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.467475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.467507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.467617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.467649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.467835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.467885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.468020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.468054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.468244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.468281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.468395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.468424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.468537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.468566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.468681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.468710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.468830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.468860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.468972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.469016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.469200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.469231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.469403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.469433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.469543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.469575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.469752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.469787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.469911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.469946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.470106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.470139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.470252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.470282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.470378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.470407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.470596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.470628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.470746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.470779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.470903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.470932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.471061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.471092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.471211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.471241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.471353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.471383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.471558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.471594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.471712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.471745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.471844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.471879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.471987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.472025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.472140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.472170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.472297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.472328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.472429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.472459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.472632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.472661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.472774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.472804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.472907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.472937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.473049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.473080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.473332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.473362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.473466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.473495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.473601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.473632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.473745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.473775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.473955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.473985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.474189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.474218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.474324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.474353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.474468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.474498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.474609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.474637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.474736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.474766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.474942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.474973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.475157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.475186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.475279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.475308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.475421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.475452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.475556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.475586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.475713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.475752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.475868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.475898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.476018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.476049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.476156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.476188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.476294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.476324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.476428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.476463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.476630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.897 [2024-12-15 05:37:05.476662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.897 qpair failed and we were unable to recover it.
00:36:51.897 [2024-12-15 05:37:05.476784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.476815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.476981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.477024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.477138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.477169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.477287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.477321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.477449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.477479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.477591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.477623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.477735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.477766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.477881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.477910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.478049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.478201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.478322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.478446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.478568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.478723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.478867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.478974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.479011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.479121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.479149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.479249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.479276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.479379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.479413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.479585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.479614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.479730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.479757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.479857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.479884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.480045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.480072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.480170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.480197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.480306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.480333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.480424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.480460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.480578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.480608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.480771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.480797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.480967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.481004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:36:51.898 [2024-12-15 05:37:05.481111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.481140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
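The xtrace line above shows the harness installing a cleanup trap: on interrupt, termination, or normal exit it runs a shared-memory diagnostic and then the test teardown, with `|| :` ensuring a failing diagnostic never skips the teardown. A minimal sketch of that bash pattern, using hypothetical stand-ins `diagnostic` and `cleanup` for `process_shm` and `nvmftestfini`:

```shell
#!/usr/bin/env bash
# Sketch of the cleanup-trap pattern from the xtrace above. "diagnostic"
# and "cleanup" are hypothetical stand-ins for process_shm and
# nvmftestfini; "|| :" swallows a diagnostic failure so cleanup still runs.
out=$(bash -c '
  cleanup()    { echo "cleanup ran"; }
  diagnostic() { return 1; }        # simulate a diagnostic that fails
  trap "diagnostic || :; cleanup" SIGINT SIGTERM EXIT
  echo "work done"
')
echo "$out"
```

The EXIT trap fires even on a normal exit, so "cleanup ran" is printed after "work done" without any explicit call at the end of the script.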
00:36:51.898 [2024-12-15 05:37:05.481236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.481267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.481369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.481397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:36:51.898 [2024-12-15 05:37:05.481516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.481547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.481710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.481737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:51.898 [2024-12-15 05:37:05.481841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.481879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.481983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.482022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.482122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:51.898 [2024-12-15 05:37:05.482150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.482246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.482274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.482372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.482401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.482498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.482527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.482658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.482685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.482786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.482820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.482923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.482949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.483058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.483091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.483190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.483218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.483314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.483341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.483507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.483536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.483639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.483671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.483830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.483857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.483953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.483979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.484095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.484122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.484226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.484253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.484415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.898 [2024-12-15 05:37:05.484443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.898 qpair failed and we were unable to recover it.
00:36:51.898 [2024-12-15 05:37:05.484534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.898 [2024-12-15 05:37:05.484561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.898 qpair failed and we were unable to recover it. 00:36:51.898 [2024-12-15 05:37:05.484667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.898 [2024-12-15 05:37:05.484694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.898 qpair failed and we were unable to recover it. 00:36:51.898 [2024-12-15 05:37:05.484860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.898 [2024-12-15 05:37:05.484888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.898 qpair failed and we were unable to recover it. 00:36:51.898 [2024-12-15 05:37:05.485015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.898 [2024-12-15 05:37:05.485044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.898 qpair failed and we were unable to recover it. 00:36:51.898 [2024-12-15 05:37:05.485143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.898 [2024-12-15 05:37:05.485170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.898 qpair failed and we were unable to recover it. 
00:36:51.898 [2024-12-15 05:37:05.485268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.898 [2024-12-15 05:37:05.485295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.898 qpair failed and we were unable to recover it. 00:36:51.898 [2024-12-15 05:37:05.485399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.898 [2024-12-15 05:37:05.485426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.898 qpair failed and we were unable to recover it. 00:36:51.899 [2024-12-15 05:37:05.485530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.899 [2024-12-15 05:37:05.485557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.899 qpair failed and we were unable to recover it. 00:36:51.899 [2024-12-15 05:37:05.485659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.899 [2024-12-15 05:37:05.485686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.899 qpair failed and we were unable to recover it. 00:36:51.899 [2024-12-15 05:37:05.485785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.899 [2024-12-15 05:37:05.485812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.899 qpair failed and we were unable to recover it. 
00:36:51.899 [2024-12-15 05:37:05.485902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.485930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.486043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.486072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.486179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.486205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.486384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.486412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.486572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.486600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.486699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.486726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.486908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.486936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.487056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.487085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.487279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.487307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.487400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.487427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.487518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.487544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.487711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.487784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.487985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.488040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.488217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.488250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.488375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.488408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.488514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.488545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.488657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.488689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.488863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.488895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.489067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.489101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.489204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.489235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.489340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.489373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.489474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.489505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.489630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.489663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.489780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.489815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.489907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.489946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.490954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.490983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.491091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.491119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.491289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.491314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.491406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.491437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.491533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.491557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.491645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.491670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.491775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.491802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.491966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.492022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.492118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.492143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.492251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.492276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.492435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.492461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.492549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.492574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.492667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.492693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.492786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.492811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.492978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.493017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.493200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.493225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.493318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.493344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.493433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.493458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.493552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.493577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.493686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.493720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.493822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.899 [2024-12-15 05:37:05.493854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.899 qpair failed and we were unable to recover it.
00:36:51.899 [2024-12-15 05:37:05.493963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.494111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.494248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.494384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.494518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.494664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.494793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.494948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.494973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.495143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.495175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.495338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.495363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.495454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.495479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.495567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.495598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.495700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.495727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.495837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.495869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.495987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.496021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.496123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.496148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.496260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.496284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.496443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.496469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.496575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.496601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.496692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.496722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.496839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.496865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.496971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.497175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.497296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.497429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.497553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.497670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.497794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.497911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.497935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.498031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.498058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.498147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.498172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.498394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:51.900 [2024-12-15 05:37:05.498420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:51.900 qpair failed and we were unable to recover it.
00:36:51.900 [2024-12-15 05:37:05.498528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.498558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.498668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.498697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.498806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.498835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.499070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.499100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.499294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.499324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 
00:36:51.900 [2024-12-15 05:37:05.499427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.499457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.499635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.499669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.499787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.499818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.499936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.499968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.500152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.500186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 
00:36:51.900 [2024-12-15 05:37:05.500305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.500337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.500533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.500566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.500668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.500700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.500888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.500919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.501033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.501066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 
00:36:51.900 [2024-12-15 05:37:05.501192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.501225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.501399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.501430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.501597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.501629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.501798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.501831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.502012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.502055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 
00:36:51.900 [2024-12-15 05:37:05.502229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.502261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.502475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.502508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.502641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.502672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.502922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.502954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.503089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.503122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 
00:36:51.900 [2024-12-15 05:37:05.503242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.503273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.503514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.503546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.503718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.503750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.503939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.900 [2024-12-15 05:37:05.503970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.900 qpair failed and we were unable to recover it. 00:36:51.900 [2024-12-15 05:37:05.504087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.504120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.504254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.504285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.504450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.504481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.504652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.504685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.504906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.504937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.505117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.505149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.505392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.505425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.505597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.505627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.505791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.505823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.505959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.506001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.506127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.506158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.506273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.506304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.506468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.506500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.506606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.506636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.506805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.506836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.506947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.506979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.507099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.507131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.507242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.507279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.507395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.507425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.507551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.507582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.507684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.507713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.507905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.507935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.508049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.508089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.508231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.508262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.508358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.508388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.508513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.508542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.508791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.508821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.509064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.509098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.509338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.509378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.509492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.509523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.509642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.509685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.509793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.509826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.510043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.510079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.510296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.510329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.510438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.510470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.510593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.510625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.510812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.510844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.511080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.511114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.511233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.511265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.511367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.511398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.511516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.511548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.511738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.511770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.511900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.511932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.512113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.512147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.512270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.512302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.512405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.512438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.512638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.512670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.512789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.512821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.513011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.513044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.513307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.513339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.513522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.513556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.513749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.513786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.513982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.514025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.514147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.514180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.514382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.514414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.514527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.514560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.901 [2024-12-15 05:37:05.514761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.514796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.514984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.515030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.515230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.515262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.515445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.515477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 00:36:51.901 [2024-12-15 05:37:05.515658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.901 [2024-12-15 05:37:05.515690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.901 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.515802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.515834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.515940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.515971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.516221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.516254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.516426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.516459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.516641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.516672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.516942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.516975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.517089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.517121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.517330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.517362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.517530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.517561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.517678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.517716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.517828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.517859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.517962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.518004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.518192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.518224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.518422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.518453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.518633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.518664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.518844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.518876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.519112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.519145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.519335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.519367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.519538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.519569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.519683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.519714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.519821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.519852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.520047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.520080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 Malloc0 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.520288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.520321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.520613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.520645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.520810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:51.902 [2024-12-15 05:37:05.520842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.520971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.521010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.521127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.521158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:36:51.902 [2024-12-15 05:37:05.521276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.521307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.521497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.521528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:51.902 [2024-12-15 05:37:05.521670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.521701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:51.902 [2024-12-15 05:37:05.521904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.521937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.522135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.522167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.522406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.522438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.522622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.522653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.522788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.522824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.522945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.522976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.523095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.523127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.523318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.523349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.523475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.523506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.523686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.523717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.523952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.523983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.524201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.524233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.524400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.524432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.524677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.524709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.524941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.524972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.525180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.525213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.525404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.525435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.525554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.525585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.525827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.525859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.525988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.526030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.526199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.526231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.526423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.526455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.526577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.526609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 
00:36:51.902 [2024-12-15 05:37:05.526714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.526745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.902 [2024-12-15 05:37:05.526923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.902 [2024-12-15 05:37:05.526955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.902 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.527098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.527130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.527314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.527346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.527519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.527549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:51.903 [2024-12-15 05:37:05.527649] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:36:51.903 [2024-12-15 05:37:05.527668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.527699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.527871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.527903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.528086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.528118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.528370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.528403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.528676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.528707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:51.903 [2024-12-15 05:37:05.528898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.528929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.529126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.529160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.529401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.529432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.529618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.529650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.529764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.529796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:51.903 [2024-12-15 05:37:05.529975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.530016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.530193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.530227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.530425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.530459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.530570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.530602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.530859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.530891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:51.903 [2024-12-15 05:37:05.531060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.531093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.531151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x214a5e0 (9): Bad file descriptor 00:36:51.903 [2024-12-15 05:37:05.531351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.531406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa22c000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.531628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.531669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.531797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.531834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.532029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.532064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:51.903 [2024-12-15 05:37:05.532240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.532271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.532383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.532419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.532549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.532593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.532740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.532772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.532905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.532939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:51.903 [2024-12-15 05:37:05.533181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.533213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.533488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.533521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.533644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.533682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.533936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.533967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.534129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.534165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:51.903 [2024-12-15 05:37:05.534363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.534399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.534577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.534612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.534718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.534750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.534929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.534964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 00:36:51.903 [2024-12-15 05:37:05.535236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:51.903 [2024-12-15 05:37:05.535276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:51.903 qpair failed and we were unable to recover it. 
00:36:52.166 [2024-12-15 05:37:05.535519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.535552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 [2024-12-15 05:37:05.535750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.535782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 [2024-12-15 05:37:05.535892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.535923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 [2024-12-15 05:37:05.536134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.536168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.166 [2024-12-15 05:37:05.536413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.536446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 
00:36:52.166 [2024-12-15 05:37:05.536620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.536652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:36:52.166 [2024-12-15 05:37:05.536891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.536930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.166 [2024-12-15 05:37:05.537167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.537200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 [2024-12-15 05:37:05.537392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.166 [2024-12-15 05:37:05.537425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.166 qpair failed and we were unable to recover it. 00:36:52.166 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:36:52.166 [2024-12-15 05:37:05.537643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.537675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.537881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.537912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.538121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.538155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.538288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.538321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.538511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.538543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.538690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.538722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.538825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.538856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.539042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.539075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.539229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.539263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.539394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.539436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.539636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.539672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.539862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.539896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.540073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.540110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.540339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.540380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.540505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.540544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.540691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.540731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.540869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.540901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.541105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.541144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.541281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.541314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.541430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.541462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.541646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.541679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.541918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.541950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.542134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.542169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.542489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.542527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.542716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.542750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.542872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.542905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.543034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.543067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.543175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.543207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.543441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.543473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.543647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.543679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.543919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.543950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.544091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.544125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:36:52.167 [2024-12-15 05:37:05.544397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.544429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.544672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.544703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:36:52.167 [2024-12-15 05:37:05.544977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.545020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:36:52.167 [2024-12-15 05:37:05.545134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.545166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.545348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.545381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:52.167 [2024-12-15 05:37:05.545490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.545522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.545705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.545737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.545932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.545964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa228000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.546187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.546240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x213c6a0 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.546495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.546531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.546710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.546742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.546932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.546963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.547145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.547178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.547363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.547394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.547655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.547688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.547814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.547846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 
00:36:52.167 [2024-12-15 05:37:05.548025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.548067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.548237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.548269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.167 [2024-12-15 05:37:05.548450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.167 [2024-12-15 05:37:05.548482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.167 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.548666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.548698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.548871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.548903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 
00:36:52.168 [2024-12-15 05:37:05.549103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.549136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.549417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.549449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.549575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.549606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.549812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.549844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.549982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.550028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 
00:36:52.168 [2024-12-15 05:37:05.550153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.550186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.550373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.550405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.550642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.550673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.550878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.550916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.551086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.551120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 
00:36:52.168 [2024-12-15 05:37:05.551365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.551396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.551566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.551598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.551784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.551816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.552010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.552044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 00:36:52.168 [2024-12-15 05:37:05.552295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:36:52.168 [2024-12-15 05:37:05.552327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420 00:36:52.168 qpair failed and we were unable to recover it. 
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:52.168 [2024-12-15 05:37:05.552583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.552614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.552733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.552765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:36:52.168 [2024-12-15 05:37:05.553046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.553079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:52.168 [2024-12-15 05:37:05.553260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.553292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:52.168 [2024-12-15 05:37:05.553476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.553507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.553685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.553716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.553978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.554032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.554213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.554245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.554441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.554476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.554601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.554633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.554868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.554910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.555204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.555241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.555412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.555447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.555708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:36:52.168 [2024-12-15 05:37:05.555742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fa234000b90 with addr=10.0.0.2, port=4420
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.555858] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:36:52.168 [2024-12-15 05:37:05.558331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.168 [2024-12-15 05:37:05.558440] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.168 [2024-12-15 05:37:05.558486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.168 [2024-12-15 05:37:05.558509] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.168 [2024-12-15 05:37:05.558529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.168 [2024-12-15 05:37:05.558580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:36:52.168 [2024-12-15 05:37:05.568221] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.168 [2024-12-15 05:37:05.568317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.168 [2024-12-15 05:37:05.568355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.168 [2024-12-15 05:37:05.568376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.168 [2024-12-15 05:37:05.568396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.168 [2024-12-15 05:37:05.568441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:36:52.168 05:37:05 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 542377
00:36:52.168 [2024-12-15 05:37:05.578223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.168 [2024-12-15 05:37:05.578303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.168 [2024-12-15 05:37:05.578330] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.168 [2024-12-15 05:37:05.578344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.168 [2024-12-15 05:37:05.578357] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.168 [2024-12-15 05:37:05.578387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.168 qpair failed and we were unable to recover it.
00:36:52.168 [2024-12-15 05:37:05.588246] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.168 [2024-12-15 05:37:05.588316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.168 [2024-12-15 05:37:05.588333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.168 [2024-12-15 05:37:05.588342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.168 [2024-12-15 05:37:05.588351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.168 [2024-12-15 05:37:05.588372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.168 qpair failed and we were unable to recover it. 
00:36:52.168 [2024-12-15 05:37:05.598204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.168 [2024-12-15 05:37:05.598266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.168 [2024-12-15 05:37:05.598279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.168 [2024-12-15 05:37:05.598286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.168 [2024-12-15 05:37:05.598296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.168 [2024-12-15 05:37:05.598311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.168 qpair failed and we were unable to recover it. 
00:36:52.168 [2024-12-15 05:37:05.608222] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.168 [2024-12-15 05:37:05.608300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.168 [2024-12-15 05:37:05.608313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.168 [2024-12-15 05:37:05.608319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.168 [2024-12-15 05:37:05.608325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.168 [2024-12-15 05:37:05.608340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.168 qpair failed and we were unable to recover it. 
00:36:52.168 [2024-12-15 05:37:05.618245] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.168 [2024-12-15 05:37:05.618343] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.168 [2024-12-15 05:37:05.618356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.168 [2024-12-15 05:37:05.618362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.618368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.618382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.628294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.628351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.628364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.628370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.628376] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.628390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.638297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.638353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.638366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.638373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.638379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.638393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.648353] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.648408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.648421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.648427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.648433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.648447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.658366] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.658420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.658432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.658438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.658444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.658459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.668349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.668420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.668433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.668439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.668445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.668459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.678473] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.678531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.678543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.678549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.678555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.678569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.688430] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.688478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.688493] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.688499] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.688505] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.688520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.698453] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.698529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.698542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.698548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.698555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.698569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.708500] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.708555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.708570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.708577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.708583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.708598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.718525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.718583] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.718596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.718602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.718608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.718623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.728539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.728613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.728626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.728632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.728644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.728658] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.738612] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.738710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.738723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.738729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.738735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.738749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.748622] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.748679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.748691] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.748698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.748704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.748718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.758656] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.758720] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.758732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.758738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.758744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.758759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.768664] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.768717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.768730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.768737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.768743] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.768757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.778689] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.778738] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.778751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.778757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.778763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.169 [2024-12-15 05:37:05.778778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.169 qpair failed and we were unable to recover it. 
00:36:52.169 [2024-12-15 05:37:05.788774] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.169 [2024-12-15 05:37:05.788832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.169 [2024-12-15 05:37:05.788844] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.169 [2024-12-15 05:37:05.788851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.169 [2024-12-15 05:37:05.788857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.170 [2024-12-15 05:37:05.788871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.170 qpair failed and we were unable to recover it. 
00:36:52.170 [2024-12-15 05:37:05.798787] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.170 [2024-12-15 05:37:05.798844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.170 [2024-12-15 05:37:05.798857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.170 [2024-12-15 05:37:05.798864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.170 [2024-12-15 05:37:05.798870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.170 [2024-12-15 05:37:05.798884] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.170 qpair failed and we were unable to recover it. 
00:36:52.170 [2024-12-15 05:37:05.808788] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.170 [2024-12-15 05:37:05.808841] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.170 [2024-12-15 05:37:05.808854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.170 [2024-12-15 05:37:05.808861] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.170 [2024-12-15 05:37:05.808867] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.170 [2024-12-15 05:37:05.808882] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.170 qpair failed and we were unable to recover it. 
00:36:52.170 [2024-12-15 05:37:05.818843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.170 [2024-12-15 05:37:05.818898] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.170 [2024-12-15 05:37:05.818912] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.170 [2024-12-15 05:37:05.818918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.170 [2024-12-15 05:37:05.818924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.170 [2024-12-15 05:37:05.818939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.170 qpair failed and we were unable to recover it. 
00:36:52.170 [2024-12-15 05:37:05.828843] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.170 [2024-12-15 05:37:05.828901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.170 [2024-12-15 05:37:05.828914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.170 [2024-12-15 05:37:05.828921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.170 [2024-12-15 05:37:05.828927] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.170 [2024-12-15 05:37:05.828941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.170 qpair failed and we were unable to recover it.
00:36:52.170 [2024-12-15 05:37:05.838814] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.170 [2024-12-15 05:37:05.838872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.170 [2024-12-15 05:37:05.838885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.170 [2024-12-15 05:37:05.838891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.170 [2024-12-15 05:37:05.838897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.170 [2024-12-15 05:37:05.838911] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.170 qpair failed and we were unable to recover it.
00:36:52.170 [2024-12-15 05:37:05.848905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.170 [2024-12-15 05:37:05.848960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.170 [2024-12-15 05:37:05.848974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.170 [2024-12-15 05:37:05.848980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.170 [2024-12-15 05:37:05.848986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.170 [2024-12-15 05:37:05.849005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.170 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.858949] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.859007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.859020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.859031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.859037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.859052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.869031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.869087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.869101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.869107] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.869113] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.869127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.878956] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.879018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.879032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.879038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.879044] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.879059] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.889029] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.889082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.889095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.889101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.889107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.889122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.899051] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.899108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.899120] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.899127] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.899133] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.899151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.909087] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.909147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.909159] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.909166] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.909171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.909185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.919099] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.919151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.919164] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.919171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.919177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.919191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.929173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.929227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.929240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.929246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.929252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.929267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.939135] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.939185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.939198] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.939205] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.939211] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.939225] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.949210] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.949320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.949333] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.949339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.949345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.949359] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.959160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.959214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.959226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.959232] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.959238] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.959253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.969212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.431 [2024-12-15 05:37:05.969266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.431 [2024-12-15 05:37:05.969279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.431 [2024-12-15 05:37:05.969285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.431 [2024-12-15 05:37:05.969291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.431 [2024-12-15 05:37:05.969305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.431 qpair failed and we were unable to recover it.
00:36:52.431 [2024-12-15 05:37:05.979257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:05.979308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:05.979320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:05.979326] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:05.979333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:05.979347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:05.989238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:05.989294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:05.989310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:05.989316] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:05.989322] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:05.989337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:05.999282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:05.999341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:05.999353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:05.999359] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:05.999365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:05.999379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.009379] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.009464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.009476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.009482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.009488] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.009502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.019322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.019374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.019386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.019392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.019398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.019412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.029434] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.029512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.029529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.029535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.029541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.029559] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.039523] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.039579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.039592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.039600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.039605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.039620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.049415] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.049467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.049479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.049486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.049492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.049507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.059447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.059504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.059516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.059522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.059529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.059543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.069493] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.069553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.069566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.069572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.069578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.069593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.079502] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.079554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.079568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.079575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.079581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.079596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.089526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.089574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.089587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.089594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.089600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.089614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.099624] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.099711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.099723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.099730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.099736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.099750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.432 [2024-12-15 05:37:06.109741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.432 [2024-12-15 05:37:06.109805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.432 [2024-12-15 05:37:06.109818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.432 [2024-12-15 05:37:06.109824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.432 [2024-12-15 05:37:06.109831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.432 [2024-12-15 05:37:06.109845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.432 qpair failed and we were unable to recover it.
00:36:52.693 [2024-12-15 05:37:06.119727] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.694 [2024-12-15 05:37:06.119782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.694 [2024-12-15 05:37:06.119798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.694 [2024-12-15 05:37:06.119805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.694 [2024-12-15 05:37:06.119811] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.694 [2024-12-15 05:37:06.119825] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.694 qpair failed and we were unable to recover it.
00:36:52.694 [2024-12-15 05:37:06.129686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.694 [2024-12-15 05:37:06.129788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.694 [2024-12-15 05:37:06.129801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.694 [2024-12-15 05:37:06.129808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.694 [2024-12-15 05:37:06.129814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.694 [2024-12-15 05:37:06.129828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.694 qpair failed and we were unable to recover it.
00:36:52.694 [2024-12-15 05:37:06.139763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.694 [2024-12-15 05:37:06.139823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.694 [2024-12-15 05:37:06.139837] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.694 [2024-12-15 05:37:06.139844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.694 [2024-12-15 05:37:06.139850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.694 [2024-12-15 05:37:06.139864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.694 qpair failed and we were unable to recover it.
00:36:52.694 [2024-12-15 05:37:06.149782] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.694 [2024-12-15 05:37:06.149837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.694 [2024-12-15 05:37:06.149850] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.694 [2024-12-15 05:37:06.149857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.694 [2024-12-15 05:37:06.149863] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.694 [2024-12-15 05:37:06.149877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.694 qpair failed and we were unable to recover it.
00:36:52.694 [2024-12-15 05:37:06.159790] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.694 [2024-12-15 05:37:06.159847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.694 [2024-12-15 05:37:06.159860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.694 [2024-12-15 05:37:06.159867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.694 [2024-12-15 05:37:06.159876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.694 [2024-12-15 05:37:06.159890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.694 qpair failed and we were unable to recover it.
00:36:52.694 [2024-12-15 05:37:06.169815] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.694 [2024-12-15 05:37:06.169868] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.694 [2024-12-15 05:37:06.169881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.694 [2024-12-15 05:37:06.169887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.694 [2024-12-15 05:37:06.169893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.694 [2024-12-15 05:37:06.169908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.694 qpair failed and we were unable to recover it.
00:36:52.694 [2024-12-15 05:37:06.179840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.179891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.694 [2024-12-15 05:37:06.179903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.694 [2024-12-15 05:37:06.179910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.694 [2024-12-15 05:37:06.179916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.694 [2024-12-15 05:37:06.179930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.694 qpair failed and we were unable to recover it. 
00:36:52.694 [2024-12-15 05:37:06.189884] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.189940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.694 [2024-12-15 05:37:06.189952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.694 [2024-12-15 05:37:06.189959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.694 [2024-12-15 05:37:06.189965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.694 [2024-12-15 05:37:06.189979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.694 qpair failed and we were unable to recover it. 
00:36:52.694 [2024-12-15 05:37:06.199905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.199961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.694 [2024-12-15 05:37:06.199974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.694 [2024-12-15 05:37:06.199981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.694 [2024-12-15 05:37:06.199987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.694 [2024-12-15 05:37:06.200008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.694 qpair failed and we were unable to recover it. 
00:36:52.694 [2024-12-15 05:37:06.209934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.209988] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.694 [2024-12-15 05:37:06.210007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.694 [2024-12-15 05:37:06.210014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.694 [2024-12-15 05:37:06.210020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.694 [2024-12-15 05:37:06.210034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.694 qpair failed and we were unable to recover it. 
00:36:52.694 [2024-12-15 05:37:06.219963] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.220021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.694 [2024-12-15 05:37:06.220035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.694 [2024-12-15 05:37:06.220041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.694 [2024-12-15 05:37:06.220047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.694 [2024-12-15 05:37:06.220061] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.694 qpair failed and we were unable to recover it. 
00:36:52.694 [2024-12-15 05:37:06.229999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.230070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.694 [2024-12-15 05:37:06.230082] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.694 [2024-12-15 05:37:06.230089] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.694 [2024-12-15 05:37:06.230095] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.694 [2024-12-15 05:37:06.230109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.694 qpair failed and we were unable to recover it. 
00:36:52.694 [2024-12-15 05:37:06.240011] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.240102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.694 [2024-12-15 05:37:06.240115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.694 [2024-12-15 05:37:06.240122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.694 [2024-12-15 05:37:06.240127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.694 [2024-12-15 05:37:06.240142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.694 qpair failed and we were unable to recover it. 
00:36:52.694 [2024-12-15 05:37:06.250022] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.694 [2024-12-15 05:37:06.250079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.250095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.250102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.250108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.250122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.260053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.260110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.260123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.260130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.260136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.260151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.270155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.270258] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.270271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.270278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.270283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.270298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.280125] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.280199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.280212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.280218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.280224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.280239] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.290155] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.290211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.290224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.290233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.290240] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.290254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.300107] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.300159] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.300172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.300178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.300184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.300198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.310212] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.310271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.310283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.310290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.310296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.310310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.320238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.320290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.320302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.320308] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.320314] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.320329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.330297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.330351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.330363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.330369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.330375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.330390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.340285] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.340334] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.340347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.340353] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.340359] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.340373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.350322] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.350375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.350387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.350393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.350399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.350413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.360359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.360418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.360430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.360436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.360442] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.360456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.695 [2024-12-15 05:37:06.370495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.695 [2024-12-15 05:37:06.370554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.695 [2024-12-15 05:37:06.370566] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.695 [2024-12-15 05:37:06.370573] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.695 [2024-12-15 05:37:06.370579] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.695 [2024-12-15 05:37:06.370593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.695 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.380444] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.957 [2024-12-15 05:37:06.380501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.957 [2024-12-15 05:37:06.380514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.957 [2024-12-15 05:37:06.380520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.957 [2024-12-15 05:37:06.380526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.957 [2024-12-15 05:37:06.380540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.957 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.390489] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.957 [2024-12-15 05:37:06.390545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.957 [2024-12-15 05:37:06.390558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.957 [2024-12-15 05:37:06.390564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.957 [2024-12-15 05:37:06.390570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.957 [2024-12-15 05:37:06.390584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.957 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.400539] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.957 [2024-12-15 05:37:06.400601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.957 [2024-12-15 05:37:06.400613] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.957 [2024-12-15 05:37:06.400620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.957 [2024-12-15 05:37:06.400626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.957 [2024-12-15 05:37:06.400640] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.957 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.410486] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.957 [2024-12-15 05:37:06.410536] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.957 [2024-12-15 05:37:06.410549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.957 [2024-12-15 05:37:06.410555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.957 [2024-12-15 05:37:06.410561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.957 [2024-12-15 05:37:06.410575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.957 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.420514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.957 [2024-12-15 05:37:06.420566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.957 [2024-12-15 05:37:06.420578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.957 [2024-12-15 05:37:06.420587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.957 [2024-12-15 05:37:06.420593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.957 [2024-12-15 05:37:06.420607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.957 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.430605] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.957 [2024-12-15 05:37:06.430713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.957 [2024-12-15 05:37:06.430726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.957 [2024-12-15 05:37:06.430732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.957 [2024-12-15 05:37:06.430738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.957 [2024-12-15 05:37:06.430752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.957 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.440572] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:52.957 [2024-12-15 05:37:06.440622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:52.957 [2024-12-15 05:37:06.440635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:52.957 [2024-12-15 05:37:06.440641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:52.957 [2024-12-15 05:37:06.440647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:52.957 [2024-12-15 05:37:06.440661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:52.957 qpair failed and we were unable to recover it. 
00:36:52.957 [2024-12-15 05:37:06.450613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.957 [2024-12-15 05:37:06.450670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.957 [2024-12-15 05:37:06.450684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.957 [2024-12-15 05:37:06.450690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.957 [2024-12-15 05:37:06.450696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.957 [2024-12-15 05:37:06.450710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.957 qpair failed and we were unable to recover it.
00:36:52.957 [2024-12-15 05:37:06.460646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.957 [2024-12-15 05:37:06.460739] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.957 [2024-12-15 05:37:06.460753] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.957 [2024-12-15 05:37:06.460759] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.957 [2024-12-15 05:37:06.460765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.957 [2024-12-15 05:37:06.460783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.957 qpair failed and we were unable to recover it.
00:36:52.957 [2024-12-15 05:37:06.470688] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.957 [2024-12-15 05:37:06.470748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.957 [2024-12-15 05:37:06.470760] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.957 [2024-12-15 05:37:06.470767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.957 [2024-12-15 05:37:06.470773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.957 [2024-12-15 05:37:06.470788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.957 qpair failed and we were unable to recover it.
00:36:52.957 [2024-12-15 05:37:06.480700] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.957 [2024-12-15 05:37:06.480752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.957 [2024-12-15 05:37:06.480765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.957 [2024-12-15 05:37:06.480771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.957 [2024-12-15 05:37:06.480777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.957 [2024-12-15 05:37:06.480792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.957 qpair failed and we were unable to recover it.
00:36:52.957 [2024-12-15 05:37:06.490764] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.957 [2024-12-15 05:37:06.490831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.957 [2024-12-15 05:37:06.490845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.957 [2024-12-15 05:37:06.490852] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.957 [2024-12-15 05:37:06.490859] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.957 [2024-12-15 05:37:06.490873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.957 qpair failed and we were unable to recover it.
00:36:52.957 [2024-12-15 05:37:06.500791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.957 [2024-12-15 05:37:06.500858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.500872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.500878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.500884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.500898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.510832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.510891] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.510904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.510910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.510916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.510930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.520864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.520923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.520936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.520943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.520949] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.520963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.530834] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.530897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.530935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.530945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.530952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.530978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.540883] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.540944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.540958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.540965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.540970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.540985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.550886] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.550944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.550961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.550968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.550974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.550989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.560970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.561038] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.561052] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.561059] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.561065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.561080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.570881] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.570964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.570978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.570984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.570990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.571009] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.580973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.581036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.581049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.581055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.581062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.581077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.591031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.591094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.591106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.591113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.591121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.591136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.601043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.601114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.601127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.601133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.601139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.601153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.611061] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.611127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.611139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.611145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.611151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.611166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.621066] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.621164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.621177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.621183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.958 [2024-12-15 05:37:06.621189] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.958 [2024-12-15 05:37:06.621203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.958 qpair failed and we were unable to recover it.
00:36:52.958 [2024-12-15 05:37:06.631163] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.958 [2024-12-15 05:37:06.631219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.958 [2024-12-15 05:37:06.631232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.958 [2024-12-15 05:37:06.631238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.959 [2024-12-15 05:37:06.631244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.959 [2024-12-15 05:37:06.631258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.959 qpair failed and we were unable to recover it.
00:36:52.959 [2024-12-15 05:37:06.641179] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:52.959 [2024-12-15 05:37:06.641255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:52.959 [2024-12-15 05:37:06.641269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:52.959 [2024-12-15 05:37:06.641275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:52.959 [2024-12-15 05:37:06.641281] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:52.959 [2024-12-15 05:37:06.641296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:52.959 qpair failed and we were unable to recover it.
00:36:53.220 [2024-12-15 05:37:06.651186] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.220 [2024-12-15 05:37:06.651241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.220 [2024-12-15 05:37:06.651254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.220 [2024-12-15 05:37:06.651261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.220 [2024-12-15 05:37:06.651267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.220 [2024-12-15 05:37:06.651281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.220 qpair failed and we were unable to recover it.
00:36:53.220 [2024-12-15 05:37:06.661218] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.220 [2024-12-15 05:37:06.661270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.220 [2024-12-15 05:37:06.661283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.220 [2024-12-15 05:37:06.661289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.220 [2024-12-15 05:37:06.661295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.220 [2024-12-15 05:37:06.661310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.220 qpair failed and we were unable to recover it.
00:36:53.220 [2024-12-15 05:37:06.671252] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.220 [2024-12-15 05:37:06.671308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.220 [2024-12-15 05:37:06.671321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.220 [2024-12-15 05:37:06.671327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.220 [2024-12-15 05:37:06.671333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.220 [2024-12-15 05:37:06.671347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.220 qpair failed and we were unable to recover it.
00:36:53.220 [2024-12-15 05:37:06.681287] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.220 [2024-12-15 05:37:06.681355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.220 [2024-12-15 05:37:06.681371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.220 [2024-12-15 05:37:06.681378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.220 [2024-12-15 05:37:06.681384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.220 [2024-12-15 05:37:06.681398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.220 qpair failed and we were unable to recover it.
00:36:53.220 [2024-12-15 05:37:06.691301] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.220 [2024-12-15 05:37:06.691353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.220 [2024-12-15 05:37:06.691366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.220 [2024-12-15 05:37:06.691372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.220 [2024-12-15 05:37:06.691379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.220 [2024-12-15 05:37:06.691393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.220 qpair failed and we were unable to recover it.
00:36:53.220 [2024-12-15 05:37:06.701342] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.220 [2024-12-15 05:37:06.701397] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.220 [2024-12-15 05:37:06.701409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.220 [2024-12-15 05:37:06.701415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.220 [2024-12-15 05:37:06.701421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.220 [2024-12-15 05:37:06.701435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.220 qpair failed and we were unable to recover it.
00:36:53.220 [2024-12-15 05:37:06.711373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.220 [2024-12-15 05:37:06.711427] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.220 [2024-12-15 05:37:06.711440] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.220 [2024-12-15 05:37:06.711446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.220 [2024-12-15 05:37:06.711452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.711466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.721386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.721441] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.721453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.721459] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.721468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.721482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.731448] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.731500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.731513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.731519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.731525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.731539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.741432] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.741484] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.741497] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.741503] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.741509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.741523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.751538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.751597] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.751609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.751615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.751621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.751636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.761503] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.761553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.761565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.761572] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.761578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.761592] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.771544] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.771591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.771604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.771610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.771616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.771631] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.781578] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.781634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.781646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.781653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.781659] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.781673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.791645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.221 [2024-12-15 05:37:06.791702] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.221 [2024-12-15 05:37:06.791714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.221 [2024-12-15 05:37:06.791721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.221 [2024-12-15 05:37:06.791727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.221 [2024-12-15 05:37:06.791741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.221 qpair failed and we were unable to recover it.
00:36:53.221 [2024-12-15 05:37:06.801684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.221 [2024-12-15 05:37:06.801775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.221 [2024-12-15 05:37:06.801787] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.221 [2024-12-15 05:37:06.801793] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.221 [2024-12-15 05:37:06.801799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.221 [2024-12-15 05:37:06.801814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.221 qpair failed and we were unable to recover it. 
00:36:53.221 [2024-12-15 05:37:06.811634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.221 [2024-12-15 05:37:06.811687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.221 [2024-12-15 05:37:06.811703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.221 [2024-12-15 05:37:06.811709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.221 [2024-12-15 05:37:06.811715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.221 [2024-12-15 05:37:06.811730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.221 qpair failed and we were unable to recover it. 
00:36:53.221 [2024-12-15 05:37:06.821661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.221 [2024-12-15 05:37:06.821719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.221 [2024-12-15 05:37:06.821732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.221 [2024-12-15 05:37:06.821738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.221 [2024-12-15 05:37:06.821745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.221 [2024-12-15 05:37:06.821759] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.221 qpair failed and we were unable to recover it. 
00:36:53.221 [2024-12-15 05:37:06.831705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.221 [2024-12-15 05:37:06.831761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.221 [2024-12-15 05:37:06.831774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.221 [2024-12-15 05:37:06.831781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.221 [2024-12-15 05:37:06.831787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.221 [2024-12-15 05:37:06.831801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.221 qpair failed and we were unable to recover it. 
00:36:53.221 [2024-12-15 05:37:06.841731] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.221 [2024-12-15 05:37:06.841782] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.222 [2024-12-15 05:37:06.841796] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.222 [2024-12-15 05:37:06.841802] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.222 [2024-12-15 05:37:06.841808] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.222 [2024-12-15 05:37:06.841822] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-12-15 05:37:06.851751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.222 [2024-12-15 05:37:06.851807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.222 [2024-12-15 05:37:06.851820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.222 [2024-12-15 05:37:06.851830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.222 [2024-12-15 05:37:06.851836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.222 [2024-12-15 05:37:06.851850] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-12-15 05:37:06.861785] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.222 [2024-12-15 05:37:06.861835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.222 [2024-12-15 05:37:06.861848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.222 [2024-12-15 05:37:06.861855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.222 [2024-12-15 05:37:06.861861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.222 [2024-12-15 05:37:06.861876] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-12-15 05:37:06.871765] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.222 [2024-12-15 05:37:06.871822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.222 [2024-12-15 05:37:06.871835] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.222 [2024-12-15 05:37:06.871842] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.222 [2024-12-15 05:37:06.871848] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.222 [2024-12-15 05:37:06.871863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-12-15 05:37:06.881851] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.222 [2024-12-15 05:37:06.881905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.222 [2024-12-15 05:37:06.881918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.222 [2024-12-15 05:37:06.881924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.222 [2024-12-15 05:37:06.881930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.222 [2024-12-15 05:37:06.881945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-12-15 05:37:06.891877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.222 [2024-12-15 05:37:06.891948] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.222 [2024-12-15 05:37:06.891960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.222 [2024-12-15 05:37:06.891967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.222 [2024-12-15 05:37:06.891973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.222 [2024-12-15 05:37:06.891988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.222 [2024-12-15 05:37:06.901909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.222 [2024-12-15 05:37:06.901964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.222 [2024-12-15 05:37:06.901977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.222 [2024-12-15 05:37:06.901983] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.222 [2024-12-15 05:37:06.901989] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.222 [2024-12-15 05:37:06.902008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.222 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.911904] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.911960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.911973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.911980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.911986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.912004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.921970] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.922031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.922044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.922051] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.922056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.922070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.931997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.932049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.932063] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.932069] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.932075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.932089] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.942033] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.942090] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.942103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.942110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.942116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.942130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.952068] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.952123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.952136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.952142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.952148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.952163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.962081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.962139] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.962152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.962158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.962164] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.962179] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.972105] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.972158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.972171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.972177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.972183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.972197] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.483 [2024-12-15 05:37:06.982136] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.483 [2024-12-15 05:37:06.982198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.483 [2024-12-15 05:37:06.982211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.483 [2024-12-15 05:37:06.982221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.483 [2024-12-15 05:37:06.982227] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.483 [2024-12-15 05:37:06.982242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.483 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:06.992185] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:06.992255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:06.992268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:06.992274] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:06.992280] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:06.992294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.002196] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:07.002251] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:07.002263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:07.002269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:07.002275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:07.002289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.012257] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:07.012313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:07.012326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:07.012333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:07.012339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:07.012353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.022249] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:07.022298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:07.022311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:07.022317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:07.022323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:07.022340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.032294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:07.032351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:07.032363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:07.032370] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:07.032375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:07.032390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.042352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:07.042446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:07.042459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:07.042465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:07.042470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:07.042484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.052329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:07.052375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:07.052387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:07.052393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:07.052399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:07.052413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.062359] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.484 [2024-12-15 05:37:07.062415] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.484 [2024-12-15 05:37:07.062427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.484 [2024-12-15 05:37:07.062434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.484 [2024-12-15 05:37:07.062439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.484 [2024-12-15 05:37:07.062454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.484 qpair failed and we were unable to recover it. 
00:36:53.484 [2024-12-15 05:37:07.072431] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.484 [2024-12-15 05:37:07.072486] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.484 [2024-12-15 05:37:07.072499] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.484 [2024-12-15 05:37:07.072505] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.484 [2024-12-15 05:37:07.072511] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.484 [2024-12-15 05:37:07.072525] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.484 qpair failed and we were unable to recover it.
00:36:53.484 [2024-12-15 05:37:07.082341] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.484 [2024-12-15 05:37:07.082394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.484 [2024-12-15 05:37:07.082407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.484 [2024-12-15 05:37:07.082413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.484 [2024-12-15 05:37:07.082419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.484 [2024-12-15 05:37:07.082433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.484 qpair failed and we were unable to recover it.
00:36:53.484 [2024-12-15 05:37:07.092435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.484 [2024-12-15 05:37:07.092501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.484 [2024-12-15 05:37:07.092514] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.484 [2024-12-15 05:37:07.092521] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.484 [2024-12-15 05:37:07.092526] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.484 [2024-12-15 05:37:07.092541] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.484 qpair failed and we were unable to recover it.
00:36:53.484 [2024-12-15 05:37:07.102483] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.484 [2024-12-15 05:37:07.102723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.484 [2024-12-15 05:37:07.102739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.484 [2024-12-15 05:37:07.102745] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.484 [2024-12-15 05:37:07.102751] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.484 [2024-12-15 05:37:07.102766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.484 qpair failed and we were unable to recover it.
00:36:53.484 [2024-12-15 05:37:07.112504] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.485 [2024-12-15 05:37:07.112555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.485 [2024-12-15 05:37:07.112571] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.485 [2024-12-15 05:37:07.112577] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.485 [2024-12-15 05:37:07.112583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.485 [2024-12-15 05:37:07.112597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.485 qpair failed and we were unable to recover it.
00:36:53.485 [2024-12-15 05:37:07.122519] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.485 [2024-12-15 05:37:07.122573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.485 [2024-12-15 05:37:07.122586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.485 [2024-12-15 05:37:07.122592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.485 [2024-12-15 05:37:07.122598] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.485 [2024-12-15 05:37:07.122612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.485 qpair failed and we were unable to recover it.
00:36:53.485 [2024-12-15 05:37:07.132546] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.485 [2024-12-15 05:37:07.132595] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.485 [2024-12-15 05:37:07.132607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.485 [2024-12-15 05:37:07.132613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.485 [2024-12-15 05:37:07.132620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.485 [2024-12-15 05:37:07.132634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.485 qpair failed and we were unable to recover it.
00:36:53.485 [2024-12-15 05:37:07.142576] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.485 [2024-12-15 05:37:07.142625] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.485 [2024-12-15 05:37:07.142638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.485 [2024-12-15 05:37:07.142645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.485 [2024-12-15 05:37:07.142651] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.485 [2024-12-15 05:37:07.142665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.485 qpair failed and we were unable to recover it.
00:36:53.485 [2024-12-15 05:37:07.152681] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.485 [2024-12-15 05:37:07.152737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.485 [2024-12-15 05:37:07.152750] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.485 [2024-12-15 05:37:07.152756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.485 [2024-12-15 05:37:07.152765] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.485 [2024-12-15 05:37:07.152780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.485 qpair failed and we were unable to recover it.
00:36:53.485 [2024-12-15 05:37:07.162626] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.485 [2024-12-15 05:37:07.162683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.485 [2024-12-15 05:37:07.162696] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.485 [2024-12-15 05:37:07.162703] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.485 [2024-12-15 05:37:07.162708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.485 [2024-12-15 05:37:07.162722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.485 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.172668] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.172762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.172775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.172781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.172788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.172802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.182705] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.182755] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.182768] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.182774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.182780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.182795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.192745] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.192798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.192810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.192816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.192822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.192836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.202768] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.202817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.202831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.202837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.202843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.202857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.212793] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.212843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.212857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.212864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.212870] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.212885] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.222821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.222872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.222885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.222892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.222898] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.222912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.232885] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.232938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.232951] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.232958] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.232964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.232978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.242847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.242901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.242918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.746 [2024-12-15 05:37:07.242924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.746 [2024-12-15 05:37:07.242930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.746 [2024-12-15 05:37:07.242944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.746 qpair failed and we were unable to recover it.
00:36:53.746 [2024-12-15 05:37:07.252913] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.746 [2024-12-15 05:37:07.252971] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.746 [2024-12-15 05:37:07.252984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.252990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.253002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.253017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.262863] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.262914] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.262927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.262933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.262939] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.262954] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.272999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.273054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.273068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.273074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.273080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.273094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.282920] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.282985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.283003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.283010] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.283019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.283033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.293031] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.293086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.293099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.293105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.293111] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.293126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.302972] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.303061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.303074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.303080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.303086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.303100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.313055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.313134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.313147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.313154] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.313160] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.313174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.323104] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.323158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.323171] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.323177] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.323183] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.323198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.333133] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.333188] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.333201] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.333207] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.333213] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.333228] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.343149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.343233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.343245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.343251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.343258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.343272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.353142] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.353198] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.353211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.353218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.353224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.353238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.363180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.363241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.363253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.363260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.363266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.363281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.373244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.373295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.373311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.747 [2024-12-15 05:37:07.373318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.747 [2024-12-15 05:37:07.373323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.747 [2024-12-15 05:37:07.373337] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.747 qpair failed and we were unable to recover it.
00:36:53.747 [2024-12-15 05:37:07.383270] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.747 [2024-12-15 05:37:07.383323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.747 [2024-12-15 05:37:07.383336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.748 [2024-12-15 05:37:07.383342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.748 [2024-12-15 05:37:07.383349] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.748 [2024-12-15 05:37:07.383362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.748 qpair failed and we were unable to recover it.
00:36:53.748 [2024-12-15 05:37:07.393310] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.748 [2024-12-15 05:37:07.393368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.748 [2024-12-15 05:37:07.393380] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.748 [2024-12-15 05:37:07.393386] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.748 [2024-12-15 05:37:07.393392] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.748 [2024-12-15 05:37:07.393407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.748 qpair failed and we were unable to recover it.
00:36:53.748 [2024-12-15 05:37:07.403349] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.748 [2024-12-15 05:37:07.403398] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.748 [2024-12-15 05:37:07.403410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.748 [2024-12-15 05:37:07.403416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.748 [2024-12-15 05:37:07.403422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.748 [2024-12-15 05:37:07.403435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.748 qpair failed and we were unable to recover it.
00:36:53.748 [2024-12-15 05:37:07.413356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:53.748 [2024-12-15 05:37:07.413406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:53.748 [2024-12-15 05:37:07.413418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:53.748 [2024-12-15 05:37:07.413428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:53.748 [2024-12-15 05:37:07.413434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:53.748 [2024-12-15 05:37:07.413447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:53.748 qpair failed and we were unable to recover it.
00:36:53.748 [2024-12-15 05:37:07.423401] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:53.748 [2024-12-15 05:37:07.423458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:53.748 [2024-12-15 05:37:07.423470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:53.748 [2024-12-15 05:37:07.423477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:53.748 [2024-12-15 05:37:07.423483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:53.748 [2024-12-15 05:37:07.423497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:53.748 qpair failed and we were unable to recover it. 
00:36:54.009 [2024-12-15 05:37:07.433439] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.009 [2024-12-15 05:37:07.433492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.009 [2024-12-15 05:37:07.433505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.009 [2024-12-15 05:37:07.433511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.009 [2024-12-15 05:37:07.433518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.009 [2024-12-15 05:37:07.433532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.009 qpair failed and we were unable to recover it. 
00:36:54.009 [2024-12-15 05:37:07.443495] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.009 [2024-12-15 05:37:07.443554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.009 [2024-12-15 05:37:07.443567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.009 [2024-12-15 05:37:07.443574] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.009 [2024-12-15 05:37:07.443580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.009 [2024-12-15 05:37:07.443594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.009 qpair failed and we were unable to recover it. 
00:36:54.009 [2024-12-15 05:37:07.453467] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.009 [2024-12-15 05:37:07.453526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.009 [2024-12-15 05:37:07.453539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.009 [2024-12-15 05:37:07.453546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.009 [2024-12-15 05:37:07.453552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.009 [2024-12-15 05:37:07.453566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.009 qpair failed and we were unable to recover it. 
00:36:54.009 [2024-12-15 05:37:07.463514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.009 [2024-12-15 05:37:07.463569] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.009 [2024-12-15 05:37:07.463583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.009 [2024-12-15 05:37:07.463590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.009 [2024-12-15 05:37:07.463596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.009 [2024-12-15 05:37:07.463612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.009 qpair failed and we were unable to recover it. 
00:36:54.009 [2024-12-15 05:37:07.473589] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.009 [2024-12-15 05:37:07.473644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.009 [2024-12-15 05:37:07.473657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.009 [2024-12-15 05:37:07.473663] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.473669] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.473683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.483609] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.483671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.483684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.483691] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.483697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.483712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.493559] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.493612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.493625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.493631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.493637] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.493651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.503625] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.503682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.503695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.503701] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.503707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.503722] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.513617] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.513676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.513689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.513697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.513705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.513720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.523608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.523666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.523679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.523688] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.523694] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.523709] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.533690] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.533746] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.533758] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.533765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.533771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.533785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.543766] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.543817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.543830] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.543839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.543845] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.543859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.553725] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.553779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.553792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.553799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.553805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.553819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.563839] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.563906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.563919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.563925] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.563931] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.563946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.573763] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.573817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.573829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.573836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.573842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.573857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.583775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.583836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.583849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.583856] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.583862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.583880] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.593841] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.593900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.593913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.593919] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.593925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.593939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.010 [2024-12-15 05:37:07.603914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.010 [2024-12-15 05:37:07.603972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.010 [2024-12-15 05:37:07.603984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.010 [2024-12-15 05:37:07.603990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.010 [2024-12-15 05:37:07.604000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.010 [2024-12-15 05:37:07.604015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.010 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.613937] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.614002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.614015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.614021] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.614027] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.614042] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.623934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.624027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.624041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.624047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.624054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.624068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.634043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.634119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.634132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.634139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.634145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.634159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.644069] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.644126] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.644139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.644145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.644151] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.644166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.654047] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.654103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.654116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.654122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.654129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.654143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.664094] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.664153] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.664165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.664171] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.664177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.664191] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.674131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.674214] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.674229] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.674235] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.674241] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.674256] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.011 [2024-12-15 05:37:07.684166] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.011 [2024-12-15 05:37:07.684221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.011 [2024-12-15 05:37:07.684234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.011 [2024-12-15 05:37:07.684240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.011 [2024-12-15 05:37:07.684246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.011 [2024-12-15 05:37:07.684261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.011 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.694194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.694265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.694278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.694284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.694291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.694305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.704207] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.704261] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.704273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.704279] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.704285] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.704300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.714243] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.714298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.714312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.714318] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.714328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.714342] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.724232] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.724285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.724298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.724304] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.724310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.724324] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.734315] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.734370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.734383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.734389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.734395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.734409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.744319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.744370] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.744383] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.744389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.744395] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.744409] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.754414] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.754468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.754480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.754486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.754492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.754507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.764373] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.764430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.764442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.764448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.764454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.764468] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.774396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.774469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.774481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.774487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.774493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.774507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.784419] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.784474] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.784486] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.784492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.784498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.784513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.794480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.272 [2024-12-15 05:37:07.794537] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.272 [2024-12-15 05:37:07.794549] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.272 [2024-12-15 05:37:07.794556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.272 [2024-12-15 05:37:07.794562] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.272 [2024-12-15 05:37:07.794576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.272 qpair failed and we were unable to recover it. 
00:36:54.272 [2024-12-15 05:37:07.804536] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.804637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.804652] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.804658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.804664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.804678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.814573] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.814665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.814678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.814684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.814690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.814704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.824482] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.824533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.824546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.824552] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.824558] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.824571] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.834615] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.834693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.834706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.834713] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.834719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.834733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.844551] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.844612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.844625] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.844631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.844640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.844655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.854629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.854679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.854692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.854698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.854704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.854718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.864661] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.864716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.864730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.864736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.864742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.864756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.874704] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.874762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.874774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.874780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.874786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.874800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.884748] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.884801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.884813] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.884820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.884825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.884840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.894734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.894836] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.894849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.894855] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.894861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.894875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.904799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.904852] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.904865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.904872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.904877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.904892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.914832] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.914909] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.914923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.914929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.914935] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.273 [2024-12-15 05:37:07.914950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.273 qpair failed and we were unable to recover it. 
00:36:54.273 [2024-12-15 05:37:07.924837] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.273 [2024-12-15 05:37:07.924894] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.273 [2024-12-15 05:37:07.924907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.273 [2024-12-15 05:37:07.924914] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.273 [2024-12-15 05:37:07.924920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.274 [2024-12-15 05:37:07.924934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.274 qpair failed and we were unable to recover it. 
00:36:54.274 [2024-12-15 05:37:07.934890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.274 [2024-12-15 05:37:07.934938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.274 [2024-12-15 05:37:07.934954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.274 [2024-12-15 05:37:07.934961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.274 [2024-12-15 05:37:07.934966] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.274 [2024-12-15 05:37:07.934981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.274 qpair failed and we were unable to recover it. 
00:36:54.274 [2024-12-15 05:37:07.944857] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.274 [2024-12-15 05:37:07.944911] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.274 [2024-12-15 05:37:07.944924] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.274 [2024-12-15 05:37:07.944931] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.274 [2024-12-15 05:37:07.944937] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.274 [2024-12-15 05:37:07.944952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.274 qpair failed and we were unable to recover it. 
00:36:54.274 [2024-12-15 05:37:07.954999] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.274 [2024-12-15 05:37:07.955106] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.274 [2024-12-15 05:37:07.955119] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.274 [2024-12-15 05:37:07.955125] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.274 [2024-12-15 05:37:07.955132] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.274 [2024-12-15 05:37:07.955146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.274 qpair failed and we were unable to recover it. 
00:36:54.534 [2024-12-15 05:37:07.964958] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.534 [2024-12-15 05:37:07.965023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.534 [2024-12-15 05:37:07.965036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.534 [2024-12-15 05:37:07.965043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.534 [2024-12-15 05:37:07.965049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.534 [2024-12-15 05:37:07.965064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.534 qpair failed and we were unable to recover it. 
00:36:54.534 [2024-12-15 05:37:07.974973] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.534 [2024-12-15 05:37:07.975027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.534 [2024-12-15 05:37:07.975040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.534 [2024-12-15 05:37:07.975050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.534 [2024-12-15 05:37:07.975056] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.534 [2024-12-15 05:37:07.975071] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.534 qpair failed and we were unable to recover it. 
00:36:54.534 [2024-12-15 05:37:07.984997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.534 [2024-12-15 05:37:07.985048] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.534 [2024-12-15 05:37:07.985060] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.534 [2024-12-15 05:37:07.985066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.534 [2024-12-15 05:37:07.985072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.534 [2024-12-15 05:37:07.985086] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.534 qpair failed and we were unable to recover it. 
00:36:54.534 [2024-12-15 05:37:07.995049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.534 [2024-12-15 05:37:07.995115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.534 [2024-12-15 05:37:07.995128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.534 [2024-12-15 05:37:07.995134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.534 [2024-12-15 05:37:07.995139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.534 [2024-12-15 05:37:07.995153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.534 qpair failed and we were unable to recover it. 
00:36:54.534 [2024-12-15 05:37:08.005070] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.534 [2024-12-15 05:37:08.005128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.535 [2024-12-15 05:37:08.005141] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.535 [2024-12-15 05:37:08.005147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.535 [2024-12-15 05:37:08.005153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.535 [2024-12-15 05:37:08.005168] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.535 qpair failed and we were unable to recover it. 
00:36:54.798 [2024-12-15 05:37:08.346027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.798 [2024-12-15 05:37:08.346081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.798 [2024-12-15 05:37:08.346094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.798 [2024-12-15 05:37:08.346100] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.798 [2024-12-15 05:37:08.346106] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.798 [2024-12-15 05:37:08.346121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.798 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.356081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.356134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.356147] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.356153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.356159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.356174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.366096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.366147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.366163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.366169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.366175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.366190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.376200] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.376262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.376274] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.376281] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.376286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.376300] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.386192] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.386301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.386312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.386319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.386325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.386339] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.396214] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.396269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.396282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.396288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.396294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.396308] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.406251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.406325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.406338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.406344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.406353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.406367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.416259] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.416312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.416324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.416330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.416336] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.416351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.426256] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.426307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.426320] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.426327] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.426333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.426347] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.436307] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.436362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.436375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.436381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.436387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.436402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.446340] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.446396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.446408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.446414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.446420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.446435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.456352] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.456406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.456418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.456424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.456431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.456445] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.466377] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.466476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.466491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.466498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.466504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.466519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:54.799 [2024-12-15 05:37:08.476420] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:54.799 [2024-12-15 05:37:08.476519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:54.799 [2024-12-15 05:37:08.476531] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:54.799 [2024-12-15 05:37:08.476538] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:54.799 [2024-12-15 05:37:08.476544] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:54.799 [2024-12-15 05:37:08.476558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:54.799 qpair failed and we were unable to recover it. 
00:36:55.060 [2024-12-15 05:37:08.486441] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.060 [2024-12-15 05:37:08.486509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.060 [2024-12-15 05:37:08.486522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.060 [2024-12-15 05:37:08.486529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.060 [2024-12-15 05:37:08.486535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.060 [2024-12-15 05:37:08.486549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.060 qpair failed and we were unable to recover it. 
00:36:55.060 [2024-12-15 05:37:08.496505] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.060 [2024-12-15 05:37:08.496572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.060 [2024-12-15 05:37:08.496588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.060 [2024-12-15 05:37:08.496595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.060 [2024-12-15 05:37:08.496600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.060 [2024-12-15 05:37:08.496615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.060 qpair failed and we were unable to recover it. 
00:36:55.060 [2024-12-15 05:37:08.506525] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.060 [2024-12-15 05:37:08.506580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.060 [2024-12-15 05:37:08.506592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.060 [2024-12-15 05:37:08.506598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.060 [2024-12-15 05:37:08.506604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.060 [2024-12-15 05:37:08.506618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.060 qpair failed and we were unable to recover it. 
00:36:55.060 [2024-12-15 05:37:08.516529] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.060 [2024-12-15 05:37:08.516593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.060 [2024-12-15 05:37:08.516606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.060 [2024-12-15 05:37:08.516612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.060 [2024-12-15 05:37:08.516618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.060 [2024-12-15 05:37:08.516632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.060 qpair failed and we were unable to recover it. 
00:36:55.060 [2024-12-15 05:37:08.526568] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.060 [2024-12-15 05:37:08.526633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.060 [2024-12-15 05:37:08.526646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.060 [2024-12-15 05:37:08.526652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.526658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.526672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.536562] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.536646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.536658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.536668] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.536674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.536688] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.546600] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.546655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.546667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.546673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.546679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.546693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.556646] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.556701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.556714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.556720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.556726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.556740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.566686] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.566750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.566773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.566780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.566786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.566805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.576716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.576768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.576781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.576787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.576793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.576813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.586738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.586820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.586833] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.586839] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.586844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.586859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.596755] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.596810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.596822] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.596828] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.596834] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.596849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.606802] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.606879] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.606891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.606897] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.606903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.606917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.616799] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.061 [2024-12-15 05:37:08.616851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.061 [2024-12-15 05:37:08.616863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.061 [2024-12-15 05:37:08.616869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.061 [2024-12-15 05:37:08.616875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.061 [2024-12-15 05:37:08.616890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.061 qpair failed and we were unable to recover it. 
00:36:55.061 [2024-12-15 05:37:08.626821] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.061 [2024-12-15 05:37:08.626878] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.061 [2024-12-15 05:37:08.626891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.061 [2024-12-15 05:37:08.626898] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.061 [2024-12-15 05:37:08.626904] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.061 [2024-12-15 05:37:08.626918] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.061 qpair failed and we were unable to recover it.
00:36:55.061 [2024-12-15 05:37:08.636873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.061 [2024-12-15 05:37:08.636928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.061 [2024-12-15 05:37:08.636941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.061 [2024-12-15 05:37:08.636948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.061 [2024-12-15 05:37:08.636953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.061 [2024-12-15 05:37:08.636968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.061 qpair failed and we were unable to recover it.
00:36:55.061 [2024-12-15 05:37:08.646828] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.061 [2024-12-15 05:37:08.646881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.061 [2024-12-15 05:37:08.646893] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.061 [2024-12-15 05:37:08.646900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.061 [2024-12-15 05:37:08.646906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.061 [2024-12-15 05:37:08.646919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.061 qpair failed and we were unable to recover it.
00:36:55.061 [2024-12-15 05:37:08.656929] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.061 [2024-12-15 05:37:08.657008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.061 [2024-12-15 05:37:08.657021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.061 [2024-12-15 05:37:08.657028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.061 [2024-12-15 05:37:08.657033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.657048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.666942] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.667002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.667015] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.667024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.667030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.667044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.676914] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.676970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.676984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.676990] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.677000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.677015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.687037] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.687104] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.687117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.687123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.687129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.687143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.697027] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.697083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.697096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.697102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.697108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.697122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.707043] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.707100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.707113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.707120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.707126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.707144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.717046] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.717102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.717117] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.717124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.717130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.717146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.727164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.727235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.727248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.727255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.727261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.727276] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.062 [2024-12-15 05:37:08.737160] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.062 [2024-12-15 05:37:08.737208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.062 [2024-12-15 05:37:08.737221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.062 [2024-12-15 05:37:08.737227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.062 [2024-12-15 05:37:08.737233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.062 [2024-12-15 05:37:08.737248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.062 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.747111] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.323 [2024-12-15 05:37:08.747167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.323 [2024-12-15 05:37:08.747179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.323 [2024-12-15 05:37:08.747186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.323 [2024-12-15 05:37:08.747192] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.323 [2024-12-15 05:37:08.747207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.323 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.757176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.323 [2024-12-15 05:37:08.757256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.323 [2024-12-15 05:37:08.757269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.323 [2024-12-15 05:37:08.757276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.323 [2024-12-15 05:37:08.757282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.323 [2024-12-15 05:37:08.757296] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.323 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.767176] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.323 [2024-12-15 05:37:08.767234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.323 [2024-12-15 05:37:08.767247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.323 [2024-12-15 05:37:08.767254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.323 [2024-12-15 05:37:08.767259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.323 [2024-12-15 05:37:08.767274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.323 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.777251] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.323 [2024-12-15 05:37:08.777336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.323 [2024-12-15 05:37:08.777348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.323 [2024-12-15 05:37:08.777354] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.323 [2024-12-15 05:37:08.777360] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.323 [2024-12-15 05:37:08.777375] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.323 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.787329] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.323 [2024-12-15 05:37:08.787413] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.323 [2024-12-15 05:37:08.787426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.323 [2024-12-15 05:37:08.787432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.323 [2024-12-15 05:37:08.787438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.323 [2024-12-15 05:37:08.787452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.323 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.797317] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.323 [2024-12-15 05:37:08.797373] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.323 [2024-12-15 05:37:08.797388] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.323 [2024-12-15 05:37:08.797394] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.323 [2024-12-15 05:37:08.797400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.323 [2024-12-15 05:37:08.797414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.323 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.807338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.323 [2024-12-15 05:37:08.807391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.323 [2024-12-15 05:37:08.807404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.323 [2024-12-15 05:37:08.807410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.323 [2024-12-15 05:37:08.807417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.323 [2024-12-15 05:37:08.807431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.323 qpair failed and we were unable to recover it.
00:36:55.323 [2024-12-15 05:37:08.817372] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.817425] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.817438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.817444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.817450] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.817464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.827405] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.827510] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.827524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.827530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.827536] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.827550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.837449] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.837504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.837517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.837523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.837532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.837547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.847461] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.847525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.847537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.847544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.847550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.847564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.857480] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.857575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.857588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.857594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.857600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.857615] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.867555] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.867617] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.867630] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.867636] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.867642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.867656] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.877553] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.877609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.877621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.877627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.877634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.877648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.887538] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.887596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.887608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.887614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.887620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.887634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.897631] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.897681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.897694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.897700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.897705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.897720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.907660] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.907711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.907726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.907732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.907738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.907752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.917707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.917762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.917775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.917782] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.917788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.917801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.927760] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.927813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.927829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.927835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.927841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.927855] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.937750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.937803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.937816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.324 [2024-12-15 05:37:08.937823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.324 [2024-12-15 05:37:08.937829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.324 [2024-12-15 05:37:08.937843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.324 qpair failed and we were unable to recover it.
00:36:55.324 [2024-12-15 05:37:08.947775] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.324 [2024-12-15 05:37:08.947826] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.324 [2024-12-15 05:37:08.947839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.325 [2024-12-15 05:37:08.947846] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.325 [2024-12-15 05:37:08.947851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.325 [2024-12-15 05:37:08.947866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.325 qpair failed and we were unable to recover it.
00:36:55.325 [2024-12-15 05:37:08.957854] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.325 [2024-12-15 05:37:08.957913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.325 [2024-12-15 05:37:08.957926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.325 [2024-12-15 05:37:08.957932] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.325 [2024-12-15 05:37:08.957938] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.325 [2024-12-15 05:37:08.957953] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.325 qpair failed and we were unable to recover it.
00:36:55.325 [2024-12-15 05:37:08.967842] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.325 [2024-12-15 05:37:08.967896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.325 [2024-12-15 05:37:08.967910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.325 [2024-12-15 05:37:08.967916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.325 [2024-12-15 05:37:08.967925] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.325 [2024-12-15 05:37:08.967940] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.325 qpair failed and we were unable to recover it.
00:36:55.325 [2024-12-15 05:37:08.977791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.325 [2024-12-15 05:37:08.977861] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.325 [2024-12-15 05:37:08.977874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.325 [2024-12-15 05:37:08.977881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.325 [2024-12-15 05:37:08.977887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.325 [2024-12-15 05:37:08.977901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.325 qpair failed and we were unable to recover it. 
00:36:55.325 [2024-12-15 05:37:08.987889] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.325 [2024-12-15 05:37:08.987958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.325 [2024-12-15 05:37:08.987970] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.325 [2024-12-15 05:37:08.987977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.325 [2024-12-15 05:37:08.987982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.325 [2024-12-15 05:37:08.988001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.325 qpair failed and we were unable to recover it. 
00:36:55.325 [2024-12-15 05:37:08.997988] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.325 [2024-12-15 05:37:08.998052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.325 [2024-12-15 05:37:08.998065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.325 [2024-12-15 05:37:08.998071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.325 [2024-12-15 05:37:08.998077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.325 [2024-12-15 05:37:08.998092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.325 qpair failed and we were unable to recover it. 
00:36:55.325 [2024-12-15 05:37:09.007895] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.325 [2024-12-15 05:37:09.007973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.325 [2024-12-15 05:37:09.007986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.325 [2024-12-15 05:37:09.007997] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.325 [2024-12-15 05:37:09.008004] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.325 [2024-12-15 05:37:09.008018] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.325 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.017984] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.018057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.018070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.018076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.018083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.018097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.027934] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.028029] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.028041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.028048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.028053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.028068] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.038055] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.038110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.038124] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.038130] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.038136] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.038151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.048072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.048125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.048138] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.048144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.048150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.048165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.058096] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.058146] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.058162] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.058169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.058175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.058189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.068149] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.068204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.068217] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.068223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.068229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.068244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.078244] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.078325] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.078338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.078345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.078351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.078365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.088173] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.088228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.088240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.088247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.088253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.088268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.098204] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.098256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.098269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.098278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.098284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.586 [2024-12-15 05:37:09.098298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.586 qpair failed and we were unable to recover it. 
00:36:55.586 [2024-12-15 05:37:09.108227] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.586 [2024-12-15 05:37:09.108279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.586 [2024-12-15 05:37:09.108292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.586 [2024-12-15 05:37:09.108298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.586 [2024-12-15 05:37:09.108304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.108318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.118267] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.118322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.118335] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.118341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.118347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.118361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.128282] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.128362] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.128375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.128382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.128388] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.128402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.138281] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.138354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.138367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.138373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.138379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.138396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.148331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.148407] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.148420] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.148426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.148432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.148447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.158413] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.158472] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.158485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.158492] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.158498] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.158512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.168447] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.168512] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.168525] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.168531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.168537] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.168551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.178468] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.178571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.178583] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.178590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.178595] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.178610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.188440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.188492] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.188504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.188511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.188516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.188531] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.198520] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.198575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.198587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.198593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.198599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.198613] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.208524] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.208600] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.208614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.208620] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.208626] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.208641] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.218526] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.218579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.218594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.218601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.218606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.218622] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.228549] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.587 [2024-12-15 05:37:09.228602] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.587 [2024-12-15 05:37:09.228615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.587 [2024-12-15 05:37:09.228625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.587 [2024-12-15 05:37:09.228631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.587 [2024-12-15 05:37:09.228645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.587 qpair failed and we were unable to recover it. 
00:36:55.587 [2024-12-15 05:37:09.238580] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:55.588 [2024-12-15 05:37:09.238636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:55.588 [2024-12-15 05:37:09.238648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:55.588 [2024-12-15 05:37:09.238655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:55.588 [2024-12-15 05:37:09.238661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:55.588 [2024-12-15 05:37:09.238675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:55.588 qpair failed and we were unable to recover it. 
00:36:55.588 [2024-12-15 05:37:09.248634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.588 [2024-12-15 05:37:09.248701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.588 [2024-12-15 05:37:09.248714] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.588 [2024-12-15 05:37:09.248721] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.588 [2024-12-15 05:37:09.248727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.588 [2024-12-15 05:37:09.248741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.588 qpair failed and we were unable to recover it.
00:36:55.588 [2024-12-15 05:37:09.258627] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.588 [2024-12-15 05:37:09.258682] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.588 [2024-12-15 05:37:09.258694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.588 [2024-12-15 05:37:09.258700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.588 [2024-12-15 05:37:09.258706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.588 [2024-12-15 05:37:09.258720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.588 qpair failed and we were unable to recover it.
00:36:55.588 [2024-12-15 05:37:09.268712] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.588 [2024-12-15 05:37:09.268788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.588 [2024-12-15 05:37:09.268801] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.588 [2024-12-15 05:37:09.268807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.588 [2024-12-15 05:37:09.268813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.588 [2024-12-15 05:37:09.268830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.588 qpair failed and we were unable to recover it.
00:36:55.848 [2024-12-15 05:37:09.278746] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.848 [2024-12-15 05:37:09.278823] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.848 [2024-12-15 05:37:09.278836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.848 [2024-12-15 05:37:09.278843] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.848 [2024-12-15 05:37:09.278849] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.848 [2024-12-15 05:37:09.278864] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.288713] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.288768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.288781] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.288787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.288793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.288808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.298738] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.298802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.298815] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.298822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.298827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.298842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.308789] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.308844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.308856] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.308863] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.308869] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.308883] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.318819] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.318877] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.318889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.318895] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.318902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.318916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.328849] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.328901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.328914] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.328920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.328926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.328941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.338856] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.338910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.338923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.338930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.338936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.338950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.348880] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.348926] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.348939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.348945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.348951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.348966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.358997] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.359097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.359113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.359120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.359125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.359140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.368946] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.369022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.369035] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.369041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.369047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.369062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.378971] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.379044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.379057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.379063] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.379069] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.379084] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.388976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.389035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.389048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.389054] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.389060] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.389075] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.399044] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.399100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.399113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.399120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.399129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.399143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.849 qpair failed and we were unable to recover it.
00:36:55.849 [2024-12-15 05:37:09.409053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.849 [2024-12-15 05:37:09.409110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.849 [2024-12-15 05:37:09.409122] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.849 [2024-12-15 05:37:09.409128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.849 [2024-12-15 05:37:09.409134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.849 [2024-12-15 05:37:09.409149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.419082] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.419131] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.419144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.419150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.419155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.419170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.429134] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.429191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.429204] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.429210] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.429216] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.429231] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.439164] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.439220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.439233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.439239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.439245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.439260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.449189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.449240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.449252] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.449259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.449264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.449278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.459213] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.459279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.459291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.459298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.459304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.459317] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.469238] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.469292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.469306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.469313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.469318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.469333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.479316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.479374] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.479387] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.479393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.479399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.479414] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.489302] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.489359] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.489374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.489381] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.489387] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.489402] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.499331] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.499389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.499401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.499408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.499414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.499429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.509396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.509451] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.509463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.509470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.509476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.509490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.519392] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.519447] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.519459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.519465] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.519471] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.519485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:55.850 [2024-12-15 05:37:09.529440] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:55.850 [2024-12-15 05:37:09.529504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:55.850 [2024-12-15 05:37:09.529517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:55.850 [2024-12-15 05:37:09.529524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:55.850 [2024-12-15 05:37:09.529532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:55.850 [2024-12-15 05:37:09.529547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:55.850 qpair failed and we were unable to recover it.
00:36:56.111 [2024-12-15 05:37:09.539438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.111 [2024-12-15 05:37:09.539525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.111 [2024-12-15 05:37:09.539539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.111 [2024-12-15 05:37:09.539545] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.111 [2024-12-15 05:37:09.539551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.111 [2024-12-15 05:37:09.539565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.111 qpair failed and we were unable to recover it.
00:36:56.111 [2024-12-15 05:37:09.549514] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.111 [2024-12-15 05:37:09.549561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.111 [2024-12-15 05:37:09.549574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.111 [2024-12-15 05:37:09.549580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.111 [2024-12-15 05:37:09.549586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.111 [2024-12-15 05:37:09.549600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.111 qpair failed and we were unable to recover it.
00:36:56.111 [2024-12-15 05:37:09.559437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.111 [2024-12-15 05:37:09.559518] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.111 [2024-12-15 05:37:09.559530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.111 [2024-12-15 05:37:09.559537] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.111 [2024-12-15 05:37:09.559543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.111 [2024-12-15 05:37:09.559557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.111 qpair failed and we were unable to recover it.
00:36:56.111 [2024-12-15 05:37:09.569593] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.111 [2024-12-15 05:37:09.569685] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.111 [2024-12-15 05:37:09.569699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.111 [2024-12-15 05:37:09.569705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.111 [2024-12-15 05:37:09.569711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.111 [2024-12-15 05:37:09.569727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.111 qpair failed and we were unable to recover it.
00:36:56.111 [2024-12-15 05:37:09.579537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.111 [2024-12-15 05:37:09.579591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.112 [2024-12-15 05:37:09.579604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.112 [2024-12-15 05:37:09.579610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.112 [2024-12-15 05:37:09.579616] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.112 [2024-12-15 05:37:09.579630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.112 qpair failed and we were unable to recover it.
00:36:56.112 [2024-12-15 05:37:09.589598] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.112 [2024-12-15 05:37:09.589665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.112 [2024-12-15 05:37:09.589678] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.112 [2024-12-15 05:37:09.589684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.112 [2024-12-15 05:37:09.589690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.112 [2024-12-15 05:37:09.589705] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.112 qpair failed and we were unable to recover it.
00:36:56.112 [2024-12-15 05:37:09.599613] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.599689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.599701] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.599707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.599713] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.599727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.609628] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.609680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.609693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.609699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.609705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.609720] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.619663] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.619719] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.619731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.619738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.619744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.619757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.629741] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.629802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.629814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.629821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.629826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.629840] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.639730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.639787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.639800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.639806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.639812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.639826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.649750] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.649803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.649816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.649822] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.649828] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.649843] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.659808] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.659862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.659875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.659884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.659890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.659904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.669810] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.669883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.669897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.669903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.669909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.669924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.679873] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.679929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.679942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.679948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.679954] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.679968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.689870] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.689923] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.689936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.689942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.689948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.689962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.699905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.699955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.699968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.112 [2024-12-15 05:37:09.699975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.112 [2024-12-15 05:37:09.699981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.112 [2024-12-15 05:37:09.700002] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.112 qpair failed and we were unable to recover it. 
00:36:56.112 [2024-12-15 05:37:09.709935] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.112 [2024-12-15 05:37:09.709981] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.112 [2024-12-15 05:37:09.709997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.710003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.710010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.710024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.719986] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.720050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.720064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.720071] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.720077] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.720092] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.729987] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.730068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.730081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.730088] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.730093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.730108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.740006] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.740057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.740070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.740076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.740082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.740097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.750048] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.750132] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.750145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.750151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.750157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.750172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.760085] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.760154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.760167] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.760174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.760180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.760194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.770072] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.770127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.770140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.770146] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.770152] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.770167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.780113] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.780167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.780180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.780187] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.780193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.780208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.113 [2024-12-15 05:37:09.790102] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.113 [2024-12-15 05:37:09.790195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.113 [2024-12-15 05:37:09.790208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.113 [2024-12-15 05:37:09.790218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.113 [2024-12-15 05:37:09.790224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.113 [2024-12-15 05:37:09.790238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.113 qpair failed and we were unable to recover it. 
00:36:56.374 [2024-12-15 05:37:09.800226] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.374 [2024-12-15 05:37:09.800285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.374 [2024-12-15 05:37:09.800299] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.374 [2024-12-15 05:37:09.800305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.374 [2024-12-15 05:37:09.800311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.374 [2024-12-15 05:37:09.800326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.374 qpair failed and we were unable to recover it. 
00:36:56.374 [2024-12-15 05:37:09.810215] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.374 [2024-12-15 05:37:09.810266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.374 [2024-12-15 05:37:09.810279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.374 [2024-12-15 05:37:09.810285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.374 [2024-12-15 05:37:09.810291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.375 [2024-12-15 05:37:09.810305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.375 qpair failed and we were unable to recover it. 
00:36:56.375 [2024-12-15 05:37:09.820239] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.375 [2024-12-15 05:37:09.820295] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.375 [2024-12-15 05:37:09.820310] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.375 [2024-12-15 05:37:09.820317] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.375 [2024-12-15 05:37:09.820323] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.375 [2024-12-15 05:37:09.820338] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.375 qpair failed and we were unable to recover it. 
00:36:56.375 [2024-12-15 05:37:09.830262] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.375 [2024-12-15 05:37:09.830312] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.375 [2024-12-15 05:37:09.830325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.375 [2024-12-15 05:37:09.830331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.375 [2024-12-15 05:37:09.830337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.375 [2024-12-15 05:37:09.830354] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.375 qpair failed and we were unable to recover it. 
00:36:56.375 [2024-12-15 05:37:09.840304] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.375 [2024-12-15 05:37:09.840361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.375 [2024-12-15 05:37:09.840374] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.375 [2024-12-15 05:37:09.840380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.375 [2024-12-15 05:37:09.840386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.375 [2024-12-15 05:37:09.840399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.375 qpair failed and we were unable to recover it. 
00:36:56.375 [2024-12-15 05:37:09.850335] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.375 [2024-12-15 05:37:09.850394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.375 [2024-12-15 05:37:09.850407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.375 [2024-12-15 05:37:09.850414] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.375 [2024-12-15 05:37:09.850420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.375 [2024-12-15 05:37:09.850434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.375 qpair failed and we were unable to recover it. 
00:36:56.375 [2024-12-15 05:37:09.860371] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.375 [2024-12-15 05:37:09.860426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.375 [2024-12-15 05:37:09.860438] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.375 [2024-12-15 05:37:09.860445] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.375 [2024-12-15 05:37:09.860451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.375 [2024-12-15 05:37:09.860465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.375 qpair failed and we were unable to recover it. 
00:36:56.375 [2024-12-15 05:37:09.870387] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.870442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.870454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.375 [2024-12-15 05:37:09.870461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.375 [2024-12-15 05:37:09.870466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.375 [2024-12-15 05:37:09.870481] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.375 qpair failed and we were unable to recover it.
00:36:56.375 [2024-12-15 05:37:09.880471] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.880526] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.880539] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.375 [2024-12-15 05:37:09.880546] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.375 [2024-12-15 05:37:09.880552] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.375 [2024-12-15 05:37:09.880566] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.375 qpair failed and we were unable to recover it.
00:36:56.375 [2024-12-15 05:37:09.890386] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.890438] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.890451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.375 [2024-12-15 05:37:09.890458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.375 [2024-12-15 05:37:09.890464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.375 [2024-12-15 05:37:09.890478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.375 qpair failed and we were unable to recover it.
00:36:56.375 [2024-12-15 05:37:09.900476] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.900532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.900544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.375 [2024-12-15 05:37:09.900551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.375 [2024-12-15 05:37:09.900557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.375 [2024-12-15 05:37:09.900570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.375 qpair failed and we were unable to recover it.
00:36:56.375 [2024-12-15 05:37:09.910541] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.910594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.910607] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.375 [2024-12-15 05:37:09.910613] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.375 [2024-12-15 05:37:09.910619] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.375 [2024-12-15 05:37:09.910634] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.375 qpair failed and we were unable to recover it.
00:36:56.375 [2024-12-15 05:37:09.920543] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.920603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.920618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.375 [2024-12-15 05:37:09.920625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.375 [2024-12-15 05:37:09.920631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.375 [2024-12-15 05:37:09.920645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.375 qpair failed and we were unable to recover it.
00:36:56.375 [2024-12-15 05:37:09.930599] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.930667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.930679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.375 [2024-12-15 05:37:09.930686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.375 [2024-12-15 05:37:09.930692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.375 [2024-12-15 05:37:09.930706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.375 qpair failed and we were unable to recover it.
00:36:56.375 [2024-12-15 05:37:09.940620] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.375 [2024-12-15 05:37:09.940686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.375 [2024-12-15 05:37:09.940698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:09.940705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:09.940710] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:09.940724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:09.950645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:09.950697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:09.950710] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:09.950716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:09.950722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:09.950737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:09.960657] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:09.960708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:09.960720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:09.960726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:09.960736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:09.960750] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:09.970685] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:09.970736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:09.970751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:09.970758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:09.970764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:09.970779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:09.980701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:09.980754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:09.980767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:09.980774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:09.980780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:09.980794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:09.990701] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:09.990798] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:09.990811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:09.990817] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:09.990823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:09.990837] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:10.000696] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:10.000787] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:10.000800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:10.000806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:10.000812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:10.000826] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:10.010860] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:10.010932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:10.010949] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:10.010957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:10.010964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:10.010982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:10.020877] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:10.020937] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:10.020950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:10.020957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:10.020963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:10.020978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:10.030907] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:10.030998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:10.031019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:10.031027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:10.031035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:10.031054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:10.040905] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:10.040965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:10.040979] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:10.040986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:10.040995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:10.041011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.376 [2024-12-15 05:37:10.050928] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.376 [2024-12-15 05:37:10.050985] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.376 [2024-12-15 05:37:10.051007] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.376 [2024-12-15 05:37:10.051014] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.376 [2024-12-15 05:37:10.051020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.376 [2024-12-15 05:37:10.051036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.376 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.060959] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.061022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.061036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.061043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.061049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.061064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.071023] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.071084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.071104] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.071118] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.071128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.071151] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.081021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.081133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.081154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.081164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.081173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.081196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.090985] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.091046] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.091065] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.091073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.091086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.091107] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.101010] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.101095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.101116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.101128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.101138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.101161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.111122] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.111202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.111224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.111236] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.111246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.111272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.121118] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.121177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.121194] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.121201] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.121207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.121223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.131097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.131182] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.131197] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.131204] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.131210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.131226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.141167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.141220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.141236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.141242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.141249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.141265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.151189] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.151245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.151260] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.151268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.151273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.151289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.161167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.161239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.161254] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.161261] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.161267] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.161282] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.171182] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.171239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.171255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.171262] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.171268] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.171283] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.181265] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.181322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.181338] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.181345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.181351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.181366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.191319] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.191376] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.191392] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.191399] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.191405] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.191421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.201356] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.201412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.201428] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.201434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.201440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.201456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.211344] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:56.637 [2024-12-15 05:37:10.211420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:56.637 [2024-12-15 05:37:10.211436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:56.637 [2024-12-15 05:37:10.211442] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:56.637 [2024-12-15 05:37:10.211448] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:56.637 [2024-12-15 05:37:10.211465] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:56.637 qpair failed and we were unable to recover it.
00:36:56.637 [2024-12-15 05:37:10.221394] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.637 [2024-12-15 05:37:10.221444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.637 [2024-12-15 05:37:10.221460] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.637 [2024-12-15 05:37:10.221474] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.637 [2024-12-15 05:37:10.221480] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.637 [2024-12-15 05:37:10.221496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.637 qpair failed and we were unable to recover it. 
00:36:56.637 [2024-12-15 05:37:10.231445] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.637 [2024-12-15 05:37:10.231501] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.637 [2024-12-15 05:37:10.231516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.637 [2024-12-15 05:37:10.231523] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.637 [2024-12-15 05:37:10.231529] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.637 [2024-12-15 05:37:10.231545] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.637 qpair failed and we were unable to recover it. 
00:36:56.637 [2024-12-15 05:37:10.241396] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.637 [2024-12-15 05:37:10.241450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.637 [2024-12-15 05:37:10.241466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.637 [2024-12-15 05:37:10.241472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.637 [2024-12-15 05:37:10.241479] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.637 [2024-12-15 05:37:10.241494] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.637 qpair failed and we were unable to recover it. 
00:36:56.637 [2024-12-15 05:37:10.251407] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.637 [2024-12-15 05:37:10.251464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.637 [2024-12-15 05:37:10.251480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.637 [2024-12-15 05:37:10.251486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.637 [2024-12-15 05:37:10.251492] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.637 [2024-12-15 05:37:10.251508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.637 qpair failed and we were unable to recover it. 
00:36:56.637 [2024-12-15 05:37:10.261437] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.637 [2024-12-15 05:37:10.261497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.637 [2024-12-15 05:37:10.261513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.637 [2024-12-15 05:37:10.261520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.637 [2024-12-15 05:37:10.261525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.637 [2024-12-15 05:37:10.261544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.637 qpair failed and we were unable to recover it. 
00:36:56.637 [2024-12-15 05:37:10.271532] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.638 [2024-12-15 05:37:10.271586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.638 [2024-12-15 05:37:10.271601] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.638 [2024-12-15 05:37:10.271607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.638 [2024-12-15 05:37:10.271614] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.638 [2024-12-15 05:37:10.271629] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.638 qpair failed and we were unable to recover it. 
00:36:56.638 [2024-12-15 05:37:10.281552] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.638 [2024-12-15 05:37:10.281608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.638 [2024-12-15 05:37:10.281622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.638 [2024-12-15 05:37:10.281631] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.638 [2024-12-15 05:37:10.281640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.638 [2024-12-15 05:37:10.281661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.638 qpair failed and we were unable to recover it. 
00:36:56.638 [2024-12-15 05:37:10.291531] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.638 [2024-12-15 05:37:10.291584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.638 [2024-12-15 05:37:10.291599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.638 [2024-12-15 05:37:10.291606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.638 [2024-12-15 05:37:10.291612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.638 [2024-12-15 05:37:10.291627] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.638 qpair failed and we were unable to recover it. 
00:36:56.638 [2024-12-15 05:37:10.301632] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.638 [2024-12-15 05:37:10.301715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.638 [2024-12-15 05:37:10.301731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.638 [2024-12-15 05:37:10.301737] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.638 [2024-12-15 05:37:10.301744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.638 [2024-12-15 05:37:10.301758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.638 qpair failed and we were unable to recover it. 
00:36:56.638 [2024-12-15 05:37:10.311571] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.638 [2024-12-15 05:37:10.311629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.638 [2024-12-15 05:37:10.311644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.638 [2024-12-15 05:37:10.311651] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.638 [2024-12-15 05:37:10.311657] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.638 [2024-12-15 05:37:10.311672] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.638 qpair failed and we were unable to recover it. 
00:36:56.638 [2024-12-15 05:37:10.321716] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.638 [2024-12-15 05:37:10.321784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.638 [2024-12-15 05:37:10.321799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.638 [2024-12-15 05:37:10.321806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.638 [2024-12-15 05:37:10.321812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.638 [2024-12-15 05:37:10.321828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.638 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.331726] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.331812] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.331828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.331836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.898 [2024-12-15 05:37:10.331842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.898 [2024-12-15 05:37:10.331858] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.898 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.341707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.341761] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.341776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.341783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.898 [2024-12-15 05:37:10.341789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.898 [2024-12-15 05:37:10.341805] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.898 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.351730] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.351805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.351821] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.351830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.898 [2024-12-15 05:37:10.351837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.898 [2024-12-15 05:37:10.351852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.898 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.361728] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.361820] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.361839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.361848] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.898 [2024-12-15 05:37:10.361856] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.898 [2024-12-15 05:37:10.361874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.898 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.371752] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.371802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.371817] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.371824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.898 [2024-12-15 05:37:10.371830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.898 [2024-12-15 05:37:10.371847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.898 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.381806] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.381895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.381910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.381917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.898 [2024-12-15 05:37:10.381922] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.898 [2024-12-15 05:37:10.381938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.898 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.391817] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.391884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.391900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.391907] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.898 [2024-12-15 05:37:10.391913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.898 [2024-12-15 05:37:10.391932] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.898 qpair failed and we were unable to recover it. 
00:36:56.898 [2024-12-15 05:37:10.401833] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.898 [2024-12-15 05:37:10.401889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.898 [2024-12-15 05:37:10.401905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.898 [2024-12-15 05:37:10.401912] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.401918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.401934] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.411847] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.411939] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.411954] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.411961] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.411967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.411983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.421872] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.421927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.421942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.421950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.421956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.421972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.432021] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.432086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.432101] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.432108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.432114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.432129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.442049] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.442156] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.442172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.442179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.442185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.442201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.452053] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.452112] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.452128] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.452135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.452141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.452156] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.462035] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.462094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.462109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.462115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.462121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.462136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.472152] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.472209] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.472224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.472231] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.472237] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.472253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.482132] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.482190] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.482209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.482216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.482222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.482238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.492219] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.492284] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.492300] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.492306] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.492313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.492329] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.502187] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.502277] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.502292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.502299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.502305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.502320] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.512180] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.512235] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.512251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.512258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.512264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.512280] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.522253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.899 [2024-12-15 05:37:10.522307] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.899 [2024-12-15 05:37:10.522322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.899 [2024-12-15 05:37:10.522329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.899 [2024-12-15 05:37:10.522338] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.899 [2024-12-15 05:37:10.522353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.899 qpair failed and we were unable to recover it. 
00:36:56.899 [2024-12-15 05:37:10.532273] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.900 [2024-12-15 05:37:10.532328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.900 [2024-12-15 05:37:10.532343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.900 [2024-12-15 05:37:10.532350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.900 [2024-12-15 05:37:10.532356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.900 [2024-12-15 05:37:10.532372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.900 qpair failed and we were unable to recover it. 
00:36:56.900 [2024-12-15 05:37:10.542297] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.900 [2024-12-15 05:37:10.542351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.900 [2024-12-15 05:37:10.542366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.900 [2024-12-15 05:37:10.542373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.900 [2024-12-15 05:37:10.542379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.900 [2024-12-15 05:37:10.542394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.900 qpair failed and we were unable to recover it. 
00:36:56.900 [2024-12-15 05:37:10.552339] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.900 [2024-12-15 05:37:10.552394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.900 [2024-12-15 05:37:10.552409] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.900 [2024-12-15 05:37:10.552416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.900 [2024-12-15 05:37:10.552422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.900 [2024-12-15 05:37:10.552438] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.900 qpair failed and we were unable to recover it. 
00:36:56.900 [2024-12-15 05:37:10.562333] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.900 [2024-12-15 05:37:10.562406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.900 [2024-12-15 05:37:10.562421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.900 [2024-12-15 05:37:10.562427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.900 [2024-12-15 05:37:10.562433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.900 [2024-12-15 05:37:10.562450] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.900 qpair failed and we were unable to recover it. 
00:36:56.900 [2024-12-15 05:37:10.572338] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.900 [2024-12-15 05:37:10.572389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.900 [2024-12-15 05:37:10.572404] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.900 [2024-12-15 05:37:10.572412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.900 [2024-12-15 05:37:10.572418] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.900 [2024-12-15 05:37:10.572434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.900 qpair failed and we were unable to recover it. 
00:36:56.900 [2024-12-15 05:37:10.582399] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:56.900 [2024-12-15 05:37:10.582454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:56.900 [2024-12-15 05:37:10.582468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:56.900 [2024-12-15 05:37:10.582475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:56.900 [2024-12-15 05:37:10.582481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:56.900 [2024-12-15 05:37:10.582496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:56.900 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.592435] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.592493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.592508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.592516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.592522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.161 [2024-12-15 05:37:10.592537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.161 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.602481] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.602539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.602555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.602562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.602568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.161 [2024-12-15 05:37:10.602584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.161 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.612491] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.612549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.612568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.612575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.612581] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.161 [2024-12-15 05:37:10.612597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.161 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.622517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.622573] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.622588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.622595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.622601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.161 [2024-12-15 05:37:10.622616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.161 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.632567] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.632653] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.632668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.632675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.632681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.161 [2024-12-15 05:37:10.632697] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.161 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.642629] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.642684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.642699] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.642705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.642711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.161 [2024-12-15 05:37:10.642726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.161 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.652652] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.652715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.652734] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.652744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.652754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.161 [2024-12-15 05:37:10.652771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.161 qpair failed and we were unable to recover it. 
00:36:57.161 [2024-12-15 05:37:10.662655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.161 [2024-12-15 05:37:10.662736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.161 [2024-12-15 05:37:10.662751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.161 [2024-12-15 05:37:10.662757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.161 [2024-12-15 05:37:10.662763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.662778] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.672594] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.672650] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.672665] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.672671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.672677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.672692] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.682720] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.682817] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.682831] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.682837] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.682843] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.682859] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.692734] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.692793] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.692809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.692816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.692822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.692838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.702751] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.702822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.702838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.702844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.702850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.702866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.712826] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.712883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.712898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.712905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.712911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.712926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.722809] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.722897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.722913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.722920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.722926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.722942] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.732890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.732946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.732961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.732968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.732975] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.732997] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.742890] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.742946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.742961] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.742968] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.742974] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.742989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.752919] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.752978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.752996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.753003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.753009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.753024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.762950] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.763009] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.763023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.763029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.763035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.763050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.772964] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.773021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.773034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.773041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.773047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.773062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.783032] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.783092] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.162 [2024-12-15 05:37:10.783106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.162 [2024-12-15 05:37:10.783115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.162 [2024-12-15 05:37:10.783121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.162 [2024-12-15 05:37:10.783136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.162 qpair failed and we were unable to recover it. 
00:36:57.162 [2024-12-15 05:37:10.793013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.162 [2024-12-15 05:37:10.793068] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.163 [2024-12-15 05:37:10.793081] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.163 [2024-12-15 05:37:10.793087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.163 [2024-12-15 05:37:10.793093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.163 [2024-12-15 05:37:10.793108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.163 qpair failed and we were unable to recover it. 
00:36:57.163 [2024-12-15 05:37:10.803131] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.163 [2024-12-15 05:37:10.803210] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.163 [2024-12-15 05:37:10.803223] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.163 [2024-12-15 05:37:10.803229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.163 [2024-12-15 05:37:10.803235] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.163 [2024-12-15 05:37:10.803250] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.163 qpair failed and we were unable to recover it.
00:36:57.163 [2024-12-15 05:37:10.813081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.163 [2024-12-15 05:37:10.813135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.163 [2024-12-15 05:37:10.813148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.163 [2024-12-15 05:37:10.813155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.163 [2024-12-15 05:37:10.813161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.163 [2024-12-15 05:37:10.813175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.163 qpair failed and we were unable to recover it.
00:36:57.163 [2024-12-15 05:37:10.823097] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.163 [2024-12-15 05:37:10.823179] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.163 [2024-12-15 05:37:10.823192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.163 [2024-12-15 05:37:10.823198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.163 [2024-12-15 05:37:10.823204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.163 [2024-12-15 05:37:10.823222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.163 qpair failed and we were unable to recover it.
00:36:57.163 [2024-12-15 05:37:10.833150] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.163 [2024-12-15 05:37:10.833203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.163 [2024-12-15 05:37:10.833216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.163 [2024-12-15 05:37:10.833222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.163 [2024-12-15 05:37:10.833229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.163 [2024-12-15 05:37:10.833244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.163 qpair failed and we were unable to recover it.
00:36:57.163 [2024-12-15 05:37:10.843178] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.163 [2024-12-15 05:37:10.843233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.163 [2024-12-15 05:37:10.843246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.163 [2024-12-15 05:37:10.843253] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.163 [2024-12-15 05:37:10.843259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.163 [2024-12-15 05:37:10.843274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.163 qpair failed and we were unable to recover it.
00:36:57.424 [2024-12-15 05:37:10.853194] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.424 [2024-12-15 05:37:10.853245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.424 [2024-12-15 05:37:10.853258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.424 [2024-12-15 05:37:10.853265] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.424 [2024-12-15 05:37:10.853272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.424 [2024-12-15 05:37:10.853287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.424 qpair failed and we were unable to recover it.
00:36:57.424 [2024-12-15 05:37:10.863223] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.424 [2024-12-15 05:37:10.863270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.424 [2024-12-15 05:37:10.863283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.424 [2024-12-15 05:37:10.863290] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.424 [2024-12-15 05:37:10.863296] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.863311] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.873283] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.873344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.873358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.873364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.873370] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.873385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.883294] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.883353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.883366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.883373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.883379] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.883394] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.893298] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.893353] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.893366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.893372] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.893378] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.893393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.903253] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.903309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.903322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.903329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.903335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.903349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.913354] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.913406] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.913422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.913428] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.913434] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.913449] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.923402] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.923477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.923489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.923495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.923501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.923516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.933438] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.933489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.933502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.933508] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.933514] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.933528] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.943436] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.943497] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.943509] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.943516] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.943522] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.943536] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.953475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.953564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.953577] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.953583] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.953589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.953607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.963517] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.963571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.963584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.963590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.963596] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.963610] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.973537] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.973589] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.973602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.973609] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.973615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.973630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.983542] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.983596] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.425 [2024-12-15 05:37:10.983609] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.425 [2024-12-15 05:37:10.983615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.425 [2024-12-15 05:37:10.983621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.425 [2024-12-15 05:37:10.983635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.425 qpair failed and we were unable to recover it.
00:36:57.425 [2024-12-15 05:37:10.993634] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.425 [2024-12-15 05:37:10.993737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:10.993751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:10.993757] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:10.993763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:10.993777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.003645] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.003706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.003719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.003726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.003732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.003745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.013630] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.013689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.013702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.013708] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.013715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.013729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.023642] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.023706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.023719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.023725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.023732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.023746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.033718] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.033772] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.033785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.033791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.033797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.033811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.043709] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.043764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.043780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.043787] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.043793] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.043806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.053791] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.053858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.053871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.053877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.053883] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.053896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.063796] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.063857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.063870] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.063877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.063882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.063897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.073804] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.073859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.073872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.073878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.073884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.073898] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.083840] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.083917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.083930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.083936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.083945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.083959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.093864] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.093918] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.093931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.093937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.093943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.093957] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.426 [2024-12-15 05:37:11.103878] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.426 [2024-12-15 05:37:11.103958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.426 [2024-12-15 05:37:11.103971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.426 [2024-12-15 05:37:11.103977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.426 [2024-12-15 05:37:11.103983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.426 [2024-12-15 05:37:11.104000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.426 qpair failed and we were unable to recover it.
00:36:57.688 [2024-12-15 05:37:11.113909] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.688 [2024-12-15 05:37:11.113961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.688 [2024-12-15 05:37:11.113974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.688 [2024-12-15 05:37:11.113981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.688 [2024-12-15 05:37:11.113987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.688 [2024-12-15 05:37:11.114005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.688 qpair failed and we were unable to recover it.
00:36:57.688 [2024-12-15 05:37:11.123965] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.688 [2024-12-15 05:37:11.124028] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.688 [2024-12-15 05:37:11.124041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.688 [2024-12-15 05:37:11.124047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.688 [2024-12-15 05:37:11.124053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.688 [2024-12-15 05:37:11.124067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.688 qpair failed and we were unable to recover it.
00:36:57.688 [2024-12-15 05:37:11.133976] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.688 [2024-12-15 05:37:11.134035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.688 [2024-12-15 05:37:11.134049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.688 [2024-12-15 05:37:11.134055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.688 [2024-12-15 05:37:11.134061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.688 [2024-12-15 05:37:11.134076] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.688 qpair failed and we were unable to recover it.
00:36:57.688 [2024-12-15 05:37:11.144013] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:36:57.688 [2024-12-15 05:37:11.144065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:36:57.688 [2024-12-15 05:37:11.144078] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:36:57.688 [2024-12-15 05:37:11.144085] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:36:57.688 [2024-12-15 05:37:11.144091] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90
00:36:57.688 [2024-12-15 05:37:11.144106] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:36:57.688 qpair failed and we were unable to recover it.
00:36:57.688 [2024-12-15 05:37:11.154042] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.688 [2024-12-15 05:37:11.154142] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.688 [2024-12-15 05:37:11.154154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.688 [2024-12-15 05:37:11.154160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.688 [2024-12-15 05:37:11.154166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.688 [2024-12-15 05:37:11.154180] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.688 qpair failed and we were unable to recover it. 
00:36:57.688 [2024-12-15 05:37:11.164081] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.688 [2024-12-15 05:37:11.164134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.688 [2024-12-15 05:37:11.164146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.688 [2024-12-15 05:37:11.164152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.688 [2024-12-15 05:37:11.164158] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.688 [2024-12-15 05:37:11.164172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.688 qpair failed and we were unable to recover it. 
00:36:57.688 [2024-12-15 05:37:11.174110] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.688 [2024-12-15 05:37:11.174161] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.688 [2024-12-15 05:37:11.174176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.688 [2024-12-15 05:37:11.174182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.688 [2024-12-15 05:37:11.174188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.688 [2024-12-15 05:37:11.174202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.688 qpair failed and we were unable to recover it. 
00:36:57.688 [2024-12-15 05:37:11.184167] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.688 [2024-12-15 05:37:11.184220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.688 [2024-12-15 05:37:11.184233] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.688 [2024-12-15 05:37:11.184239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.688 [2024-12-15 05:37:11.184245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.688 [2024-12-15 05:37:11.184260] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.688 qpair failed and we were unable to recover it. 
00:36:57.688 [2024-12-15 05:37:11.194158] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.688 [2024-12-15 05:37:11.194208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.688 [2024-12-15 05:37:11.194220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.688 [2024-12-15 05:37:11.194226] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.688 [2024-12-15 05:37:11.194232] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.688 [2024-12-15 05:37:11.194246] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.688 qpair failed and we were unable to recover it. 
00:36:57.688 [2024-12-15 05:37:11.204195] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.688 [2024-12-15 05:37:11.204252] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.688 [2024-12-15 05:37:11.204265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.688 [2024-12-15 05:37:11.204272] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.688 [2024-12-15 05:37:11.204277] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.688 [2024-12-15 05:37:11.204292] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.688 qpair failed and we were unable to recover it. 
00:36:57.688 [2024-12-15 05:37:11.214216] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.688 [2024-12-15 05:37:11.214269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.688 [2024-12-15 05:37:11.214282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.214291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.214297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.214312] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.224236] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.224290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.224303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.224309] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.224315] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.224330] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.234292] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.234341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.234355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.234361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.234367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.234381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.244316] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.244372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.244385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.244391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.244397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.244412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.254313] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.254372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.254385] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.254391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.254397] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.254411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.264351] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.264402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.264415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.264421] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.264427] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.264442] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.274423] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.274488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.274501] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.274507] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.274513] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.274527] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.284424] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.284479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.284492] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.284498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.284504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.284518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.294416] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.294471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.294484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.294491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.294497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.294511] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.304475] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.304532] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.304544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.304550] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.304556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.304570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.314509] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.314561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.314573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.314579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.314585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.314599] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.324616] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.324676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.324688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.324695] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.324701] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.324715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.334547] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.334647] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.334659] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.334665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.334671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.689 [2024-12-15 05:37:11.334686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.689 qpair failed and we were unable to recover it. 
00:36:57.689 [2024-12-15 05:37:11.344684] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.689 [2024-12-15 05:37:11.344737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.689 [2024-12-15 05:37:11.344751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.689 [2024-12-15 05:37:11.344760] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.689 [2024-12-15 05:37:11.344766] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.690 [2024-12-15 05:37:11.344781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.690 qpair failed and we were unable to recover it. 
00:36:57.690 [2024-12-15 05:37:11.354608] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.690 [2024-12-15 05:37:11.354665] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.690 [2024-12-15 05:37:11.354677] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.690 [2024-12-15 05:37:11.354684] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.690 [2024-12-15 05:37:11.354690] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.690 [2024-12-15 05:37:11.354704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.690 qpair failed and we were unable to recover it. 
00:36:57.690 [2024-12-15 05:37:11.364655] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.690 [2024-12-15 05:37:11.364711] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.690 [2024-12-15 05:37:11.364723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.690 [2024-12-15 05:37:11.364729] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.690 [2024-12-15 05:37:11.364735] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.690 [2024-12-15 05:37:11.364749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.690 qpair failed and we were unable to recover it. 
00:36:57.949 [2024-12-15 05:37:11.374707] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.949 [2024-12-15 05:37:11.374762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.949 [2024-12-15 05:37:11.374774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.949 [2024-12-15 05:37:11.374781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.949 [2024-12-15 05:37:11.374787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.950 [2024-12-15 05:37:11.374801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.950 qpair failed and we were unable to recover it. 
00:36:57.950 [2024-12-15 05:37:11.384702] ctrlr.c: 764:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:36:57.950 [2024-12-15 05:37:11.384751] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:36:57.950 [2024-12-15 05:37:11.384765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:36:57.950 [2024-12-15 05:37:11.384772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:36:57.950 [2024-12-15 05:37:11.384778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fa234000b90 00:36:57.950 [2024-12-15 05:37:11.384796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:36:57.950 qpair failed and we were unable to recover it. 00:36:57.950 [2024-12-15 05:37:11.384899] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:36:57.950 A controller has encountered a failure and is being reset. 00:36:57.950 Controller properly reset. 00:36:57.950 Initializing NVMe Controllers 00:36:57.950 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:57.950 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:36:57.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:36:57.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:36:57.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:36:57.950 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:36:57.950 Initialization complete. Launching workers. 
00:36:57.950 Starting thread on core 1 00:36:57.950 Starting thread on core 2 00:36:57.950 Starting thread on core 3 00:36:57.950 Starting thread on core 0 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:36:57.950 00:36:57.950 real 0m10.674s 00:36:57.950 user 0m19.484s 00:36:57.950 sys 0m4.608s 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:36:57.950 ************************************ 00:36:57.950 END TEST nvmf_target_disconnect_tc2 00:36:57.950 ************************************ 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@516 -- # nvmfcleanup 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # sync 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set +e 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # for i in {1..20} 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:36:57.950 rmmod nvme_tcp 00:36:57.950 rmmod nvme_fabrics 00:36:57.950 rmmod nvme_keyring 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics
00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@128 -- # set -e 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # return 0 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@517 -- # '[' -n 543067 ']' 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@518 -- # killprocess 543067 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 543067 ']' 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 543067 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:36:57.950 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 543067 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 543067' 00:36:58.209 killing process with pid 543067 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 543067 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 543067 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@524 -- # nvmf_tcp_fini
00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # iptr 00:36:58.209 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-save 00:36:58.210 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:36:58.210 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@791 -- # iptables-restore 00:36:58.210 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:36:58.210 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # remove_spdk_ns 00:36:58.210 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:36:58.210 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:36:58.210 05:37:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:00.748 05:37:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:00.748 00:37:00.748 real 0m19.370s 00:37:00.748 user 0m46.888s 00:37:00.748 sys 0m9.447s 00:37:00.748 05:37:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.748 05:37:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:37:00.748 ************************************ 00:37:00.748 END TEST nvmf_target_disconnect 00:37:00.748 ************************************ 00:37:00.748 05:37:13 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:37:00.748 00:37:00.748 real 7m23.563s 00:37:00.748 user 16m51.471s 00:37:00.748 sys 2m9.196s 00:37:00.748 05:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:00.748 05:37:13 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:37:00.748 ************************************ 00:37:00.748 END TEST nvmf_host 00:37:00.748 ************************************ 00:37:00.749 05:37:13 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ tcp = \t\c\p ]] 00:37:00.749 05:37:13 nvmf_tcp -- nvmf/nvmf.sh@19 -- # [[ 0 -eq 0 ]] 00:37:00.749 05:37:13 nvmf_tcp -- nvmf/nvmf.sh@20 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:00.749 05:37:13 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:00.749 05:37:13 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.749 05:37:13 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:37:00.749 ************************************ 00:37:00.749 START TEST nvmf_target_core_interrupt_mode 00:37:00.749 ************************************ 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:37:00.749 * Looking for test storage...
00:37:00.749 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lcov --version 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:37:00.749 05:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:00.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.749 --rc 
genhtml_branch_coverage=1 00:37:00.749 --rc genhtml_function_coverage=1 00:37:00.749 --rc genhtml_legend=1 00:37:00.749 --rc geninfo_all_blocks=1 00:37:00.749 --rc geninfo_unexecuted_blocks=1 00:37:00.749 00:37:00.749 ' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:00.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.749 --rc genhtml_branch_coverage=1 00:37:00.749 --rc genhtml_function_coverage=1 00:37:00.749 --rc genhtml_legend=1 00:37:00.749 --rc geninfo_all_blocks=1 00:37:00.749 --rc geninfo_unexecuted_blocks=1 00:37:00.749 00:37:00.749 ' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:00.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.749 --rc genhtml_branch_coverage=1 00:37:00.749 --rc genhtml_function_coverage=1 00:37:00.749 --rc genhtml_legend=1 00:37:00.749 --rc geninfo_all_blocks=1 00:37:00.749 --rc geninfo_unexecuted_blocks=1 00:37:00.749 00:37:00.749 ' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:00.749 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:00.749 --rc genhtml_branch_coverage=1 00:37:00.749 --rc genhtml_function_coverage=1 00:37:00.749 --rc genhtml_legend=1 00:37:00.749 --rc geninfo_all_blocks=1 00:37:00.749 --rc geninfo_unexecuted_blocks=1 00:37:00.749 00:37:00.749 ' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' 
Linux = Linux ']' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:00.749 
05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.749 05:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # : 0 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:00.749 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:00.750 
05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:00.750 ************************************ 00:37:00.750 START TEST nvmf_abort 00:37:00.750 ************************************ 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:37:00.750 * Looking for test storage... 
00:37:00.750 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lcov --version 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:00.750 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:37:01.010 05:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.010 --rc genhtml_branch_coverage=1 00:37:01.010 --rc genhtml_function_coverage=1 00:37:01.010 --rc genhtml_legend=1 00:37:01.010 --rc geninfo_all_blocks=1 00:37:01.010 --rc geninfo_unexecuted_blocks=1 00:37:01.010 00:37:01.010 ' 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.010 --rc genhtml_branch_coverage=1 00:37:01.010 --rc genhtml_function_coverage=1 00:37:01.010 --rc genhtml_legend=1 00:37:01.010 --rc geninfo_all_blocks=1 00:37:01.010 --rc geninfo_unexecuted_blocks=1 00:37:01.010 00:37:01.010 ' 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.010 --rc genhtml_branch_coverage=1 00:37:01.010 --rc genhtml_function_coverage=1 00:37:01.010 --rc genhtml_legend=1 00:37:01.010 --rc geninfo_all_blocks=1 00:37:01.010 --rc geninfo_unexecuted_blocks=1 00:37:01.010 00:37:01.010 ' 00:37:01.010 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:01.010 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:01.010 --rc genhtml_branch_coverage=1 00:37:01.010 --rc genhtml_function_coverage=1 00:37:01.010 --rc genhtml_legend=1 00:37:01.010 --rc geninfo_all_blocks=1 00:37:01.010 --rc geninfo_unexecuted_blocks=1 00:37:01.010 00:37:01.010 ' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:01.011 05:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # : 0 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:01.011 05:37:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # xtrace_disable 00:37:01.011 05:37:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 
00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # pci_devs=() 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # net_devs=() 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # e810=() 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@320 -- # local -ga e810 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # x722=() 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # local -ga x722 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # mlx=() 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@322 -- # local -ga mlx 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:07.593 05:37:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:07.593 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:07.593 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:07.593 
05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:07.593 Found net devices under 0000:af:00.0: cvl_0_0 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 
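The discovery steps above resolve each supported NIC's PCI function to its kernel net device through sysfs. A minimal standalone sketch of that lookup (the `SYSFS_ROOT` override and the helper name `pci_net_devs` are illustrative, not part of the test scripts; the 0000:af:00.0 address comes from the log):

```shell
# Sketch of the sysfs lookup performed for each PCI function above.
# SYSFS_ROOT is an illustrative override so the helper can be exercised
# against a fake tree; the real scripts read /sys directly.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}

pci_net_devs() {
    # Print the net device name(s) bound to a PCI function, e.g. cvl_0_0.
    local pci=$1 d
    for d in "$SYSFS_ROOT/bus/pci/devices/$pci/net/"*; do
        [ -e "$d" ] && echo "${d##*/}"
    done
}

# Example (address taken from the log): pci_net_devs 0000:af:00.0
```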
00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:07.593 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:07.594 Found net devices under 0000:af:00.1: cvl_0_1 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@442 -- # is_hw=yes 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:07.594 05:37:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link 
set cvl_0_0 up 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:07.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:07.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.330 ms 00:37:07.594 00:37:07.594 --- 10.0.0.2 ping statistics --- 00:37:07.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.594 rtt min/avg/max/mdev = 0.330/0.330/0.330/0.000 ms 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:07.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:37:07.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.148 ms 00:37:07.594 00:37:07.594 --- 10.0.0.1 ping statistics --- 00:37:07.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:07.594 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@450 -- # return 0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@509 -- # 
nvmfpid=547668 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@510 -- # waitforlisten 547668 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 547668 ']' 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:37:07.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.594 [2024-12-15 05:37:20.526308] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:07.594 [2024-12-15 05:37:20.527311] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:37:07.594 [2024-12-15 05:37:20.527352] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:07.594 [2024-12-15 05:37:20.606580] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:07.594 [2024-12-15 05:37:20.628762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:07.594 [2024-12-15 05:37:20.628796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:07.594 [2024-12-15 05:37:20.628804] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:07.594 [2024-12-15 05:37:20.628810] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:07.594 [2024-12-15 05:37:20.628815] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:37:07.594 [2024-12-15 05:37:20.630159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:07.594 [2024-12-15 05:37:20.630264] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:07.594 [2024-12-15 05:37:20.630265] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:07.594 [2024-12-15 05:37:20.693427] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:07.594 [2024-12-15 05:37:20.694211] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:07.594 [2024-12-15 05:37:20.694392] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:37:07.594 [2024-12-15 05:37:20.694560] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.594 [2024-12-15 05:37:20.771096] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:37:07.594 Malloc0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.594 Delay0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:37:07.594 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.595 [2024-12-15 05:37:20.871072] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:07.595 05:37:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:37:07.595 [2024-12-15 05:37:20.992720] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:37:09.500 Initializing NVMe Controllers 00:37:09.500 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:37:09.500 controller IO queue size 128 less than required 00:37:09.500 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:37:09.500 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:37:09.500 Initialization complete. Launching workers. 
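The target-side configuration that abort.sh drives above can be reproduced by hand against a running `nvmf_tgt`. A hedged sketch of the equivalent `rpc.py` sequence (the `SPDK_DIR` default and the `rpc` wrapper function are assumptions for illustration; the bdev names, NQN, listener address, and parameters mirror the log):

```shell
# Hypothetical standalone reproduction of the RPC sequence shown above.
# Guarded so the sketch is a no-op where an SPDK checkout is not present.
SPDK_DIR=${SPDK_DIR:-/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk}
NQN=nqn.2016-06.io.spdk:cnode0
TGT_IP=10.0.0.2

if [ -x "$SPDK_DIR/scripts/rpc.py" ]; then
    rpc() { "$SPDK_DIR/scripts/rpc.py" "$@"; }
    rpc nvmf_create_transport -t tcp -o -u 8192 -a 256      # TCP transport (abort.sh@17)
    rpc bdev_malloc_create 64 4096 -b Malloc0               # 64 MiB malloc bdev, 4096-byte blocks
    rpc bdev_delay_create -b Malloc0 -d Delay0 \
        -r 1000000 -t 1000000 -w 1000000 -n 1000000         # delay bdev so aborts catch queued I/O
    rpc nvmf_create_subsystem "$NQN" -a -s SPDK0
    rpc nvmf_subsystem_add_ns "$NQN" Delay0
    rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a "$TGT_IP" -s 4420
fi
```

The abort example is then pointed at the listener exactly as in the log: `build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -q 128`.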
00:37:09.500 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37665 00:37:09.500 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37722, failed to submit 66 00:37:09.500 success 37665, unsuccessful 57, failed 0 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@121 -- # sync 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@124 -- # set +e 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:09.500 rmmod nvme_tcp 00:37:09.500 rmmod nvme_fabrics 00:37:09.500 rmmod nvme_keyring 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:09.500 05:37:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@128 -- # set -e 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # return 0 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@517 -- # '[' -n 547668 ']' 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@518 -- # killprocess 547668 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 547668 ']' 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 547668 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 547668 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 547668' 00:37:09.500 killing process with pid 547668 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 547668 00:37:09.500 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 547668 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@297 -- # iptr 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-save 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@791 -- # iptables-restore 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:09.760 05:37:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:12.378 00:37:12.378 real 0m11.122s 00:37:12.378 user 0m10.097s 00:37:12.378 sys 0m5.661s 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:37:12.378 ************************************ 00:37:12.378 END TEST nvmf_abort 00:37:12.378 ************************************ 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- 
nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:12.378 ************************************ 00:37:12.378 START TEST nvmf_ns_hotplug_stress 00:37:12.378 ************************************ 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:37:12.378 * Looking for test storage... 00:37:12.378 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lcov --version 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
scripts/common.sh@334 -- # local ver2 ver2_l 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:37:12.378 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.379 --rc genhtml_branch_coverage=1 00:37:12.379 --rc genhtml_function_coverage=1 00:37:12.379 --rc genhtml_legend=1 00:37:12.379 --rc geninfo_all_blocks=1 00:37:12.379 --rc geninfo_unexecuted_blocks=1 00:37:12.379 00:37:12.379 ' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.379 --rc genhtml_branch_coverage=1 00:37:12.379 --rc genhtml_function_coverage=1 00:37:12.379 --rc genhtml_legend=1 00:37:12.379 --rc geninfo_all_blocks=1 00:37:12.379 --rc geninfo_unexecuted_blocks=1 00:37:12.379 00:37:12.379 ' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.379 --rc genhtml_branch_coverage=1 00:37:12.379 --rc genhtml_function_coverage=1 00:37:12.379 --rc genhtml_legend=1 00:37:12.379 --rc geninfo_all_blocks=1 00:37:12.379 --rc geninfo_unexecuted_blocks=1 00:37:12.379 00:37:12.379 ' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:12.379 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:12.379 --rc genhtml_branch_coverage=1 00:37:12.379 --rc genhtml_function_coverage=1 00:37:12.379 --rc genhtml_legend=1 00:37:12.379 --rc geninfo_all_blocks=1 00:37:12.379 --rc geninfo_unexecuted_blocks=1 00:37:12.379 00:37:12.379 ' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@7 -- # uname -s 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # : 0 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:12.379 05:37:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:12.379 05:37:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # xtrace_disable 00:37:12.379 05:37:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # pci_devs=() 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # local -a pci_devs 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # pci_net_devs=() 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # pci_drivers=() 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # local -A pci_drivers 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # net_devs=() 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@319 -- # local -ga net_devs 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # e810=() 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # local -ga e810 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # x722=() 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # local -ga x722 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # mlx=() 00:37:17.654 05:37:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@322 -- # local -ga mlx 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:37:17.654 
05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:37:17.654 Found 0000:af:00.0 (0x8086 - 0x159b) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:37:17.654 05:37:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:37:17.654 Found 0000:af:00.1 (0x8086 - 0x159b) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.654 05:37:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:37:17.654 Found net devices under 0000:af:00.0: cvl_0_0 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # [[ up == up ]] 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:37:17.654 Found net devices under 0000:af:00.1: cvl_0_1 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 
00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@442 -- # is_hw=yes 00:37:17.654 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:37:17.655 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p 
tcp --dport 4420 -j ACCEPT' 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:37:17.914 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:37:17.914 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.372 ms 00:37:17.914 00:37:17.914 --- 10.0.0.2 ping statistics --- 00:37:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.914 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:37:17.914 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:37:17.914 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:37:17.914 00:37:17.914 --- 10.0.0.1 ping statistics --- 00:37:17.914 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:37:17.914 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # return 0 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:37:17.914 05:37:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@509 -- # nvmfpid=551443 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@510 -- # waitforlisten 551443 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 551443 ']' 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:37:17.914 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:37:17.914 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:37:17.915 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:37:17.915 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:17.915 [2024-12-15 05:37:31.574738] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:37:17.915 [2024-12-15 05:37:31.575626] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:37:17.915 [2024-12-15 05:37:31.575658] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:37:18.177 [2024-12-15 05:37:31.650665] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:37:18.177 [2024-12-15 05:37:31.672399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:37:18.177 [2024-12-15 05:37:31.672432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:37:18.177 [2024-12-15 05:37:31.672440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:37:18.177 [2024-12-15 05:37:31.672446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:37:18.177 [2024-12-15 05:37:31.672451] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:37:18.177 [2024-12-15 05:37:31.673759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:37:18.177 [2024-12-15 05:37:31.673867] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:37:18.177 [2024-12-15 05:37:31.673868] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:37:18.177 [2024-12-15 05:37:31.735736] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:37:18.177 [2024-12-15 05:37:31.736597] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:37:18.177 [2024-12-15 05:37:31.736966] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:37:18.177 [2024-12-15 05:37:31.737092] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 
00:37:18.177 05:37:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:37:18.435 [2024-12-15 05:37:31.978671] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:37:18.435 05:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:37:18.694 05:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:37:18.694 [2024-12-15 05:37:32.363030] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:37:18.694 05:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:37:18.953 05:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:37:19.212 Malloc0 00:37:19.212 05:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:37:19.471 Delay0 00:37:19.471 05:37:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:19.729 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:37:19.729 NULL1 00:37:19.729 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:37:19.988 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=551909 00:37:19.988 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:37:19.988 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:19.988 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.247 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:20.505 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:37:20.505 05:37:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:37:20.505 true 00:37:20.505 05:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:20.505 05:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:20.763 05:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:21.023 05:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:37:21.023 05:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:37:21.281 true 00:37:21.281 05:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:21.281 05:37:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.216 Read completed with error (sct=0, sc=11) 00:37:22.216 05:37:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:22.216 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.475 Message 
suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.475 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:22.475 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:37:22.475 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:37:22.734 true 00:37:22.734 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:22.734 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:22.993 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:23.252 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:37:23.252 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:37:23.252 true 00:37:23.252 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:23.252 05:37:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:24.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 
00:37:24.629 05:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:24.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.629 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:24.888 05:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:37:24.888 05:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:37:24.888 true 00:37:24.888 05:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:24.888 05:37:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:25.824 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:25.824 05:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:26.083 05:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:37:26.083 
05:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:37:26.083 true 00:37:26.341 05:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:26.341 05:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:26.341 05:37:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:26.599 05:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:37:26.599 05:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:37:26.858 true 00:37:26.858 05:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:26.858 05:37:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:28.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:28.235 05:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:28.235 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:37:28.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:28.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:28.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:28.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:28.235 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:28.235 05:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:37:28.235 05:37:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:37:28.494 true 00:37:28.494 05:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:28.494 05:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.431 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:29.431 05:37:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:29.431 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:37:29.431 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:37:29.690 true 00:37:29.690 05:37:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:29.690 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:29.948 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:30.207 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:37:30.207 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:37:30.207 true 00:37:30.207 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:30.207 05:37:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:31.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.598 05:37:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:31.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.598 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:37:31.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.598 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:31.599 05:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:37:31.599 05:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:37:31.861 true 00:37:31.861 05:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:31.861 05:37:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:32.796 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:32.796 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:32.796 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:37:32.796 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:37:33.055 true 00:37:33.055 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:33.055 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:33.313 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:33.313 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:37:33.313 05:37:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:37:33.572 true 00:37:33.572 05:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:33.572 05:37:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:34.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.951 05:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:34.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:34.951 05:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:37:34.951 05:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:37:35.210 true 00:37:35.210 05:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:35.210 05:37:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.144 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:36.144 05:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:36.144 05:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:37:36.144 05:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:37:36.403 true 00:37:36.403 05:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:36.403 05:37:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:36.661 05:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 
Delay0 00:37:36.920 05:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:37:36.920 05:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:37:36.920 true 00:37:36.920 05:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:36.920 05:37:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:38.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.346 05:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:38.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.346 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:38.346 05:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:37:38.346 05:37:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:37:38.605 true 00:37:38.605 05:37:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:38.605 05:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:39.542 05:37:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:39.542 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:37:39.542 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:37:39.801 true 00:37:39.801 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:39.801 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:40.060 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:40.319 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:37:40.319 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:37:40.319 true 
00:37:40.319 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:40.319 05:37:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:41.696 05:37:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:41.696 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:41.696 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:37:41.696 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:37:41.696 true 00:37:41.696 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:41.696 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:41.954 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:42.212 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:37:42.212 05:37:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:37:42.470 true 00:37:42.470 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:42.470 05:37:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:43.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.847 05:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:43.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.847 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:43.847 05:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:37:43.847 05:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:37:43.847 true 00:37:44.113 05:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:44.113 05:37:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:44.682 05:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:44.941 05:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:37:44.941 05:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:37:45.199 true 00:37:45.199 05:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:45.199 05:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:45.458 05:37:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:45.716 05:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:37:45.716 05:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:37:45.716 true 00:37:45.716 05:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:45.716 05:37:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:46.653 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.912 05:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:46.912 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:46.912 05:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:37:46.912 05:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:37:47.171 true 00:37:47.171 05:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:47.171 05:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:47.430 05:38:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:47.689 05:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:37:47.689 05:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:37:47.689 true 00:37:47.689 05:38:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:47.689 05:38:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:49.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.065 05:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:49.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.065 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:37:49.065 05:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:37:49.065 05:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:37:49.324 true 00:37:49.324 05:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909 00:37:49.324 05:38:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:50.261 05:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:37:50.261 05:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:37:50.261 05:38:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:37:50.261 Initializing NVMe Controllers 00:37:50.261 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:37:50.261 Controller IO queue size 128, less than required. 00:37:50.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:50.261 Controller IO queue size 128, less than required. 00:37:50.261 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:37:50.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:37:50.261 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:37:50.261 Initialization complete. Launching workers. 
00:37:50.261 ========================================================
00:37:50.261 Latency(us)
00:37:50.261 Device Information : IOPS MiB/s Average min max
00:37:50.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2067.10 1.01 40510.95 2840.42 1012852.33
00:37:50.261 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17455.12 8.52 7315.29 1554.13 373671.34
00:37:50.261 ========================================================
00:37:50.261 Total : 19522.22 9.53 10830.20 1554.13 1012852.33
00:37:50.261
00:37:50.521 true
00:37:50.521 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 551909
00:37:50.521 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (551909) - No such process
00:37:50.521 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 551909
00:37:50.521 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:37:50.780 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:37:51.039 null0 00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:51.039 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:37:51.298 null1 00:37:51.298 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:51.298 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:51.298 05:38:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:37:51.564 null2 00:37:51.564 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:51.564 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:51.564 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:37:51.564 null3 00:37:51.564 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 
00:37:51.564 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:51.564 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:37:51.826 null4 00:37:51.826 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:51.826 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:51.826 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:37:52.084 null5 00:37:52.084 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:52.084 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:52.084 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:37:52.084 null6 00:37:52.084 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:52.084 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:52.084 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:37:52.343 null7 00:37:52.343 05:38:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.343 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 556961 556964 556966 556969 556972 556975 556979 556981 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 
00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.344 05:38:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:52.603 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:52.862 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.862 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.862 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:52.862 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.862 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.862 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:52.863 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:53.120 05:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.120 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:53.378 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.378 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:53.378 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:53.378 05:38:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:53.378 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:53.378 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:53.378 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:53.378 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:37:53.636 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 
00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:53.637 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:53.895 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:54.155 05:38:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:54.155 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:54.414 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:54.414 05:38:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.414 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:54.673 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 
null1 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:54.932 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.191 05:38:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:55.450 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:55.709 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:55.968 05:38:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:55.968 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:55.968 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:55.968 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:55.968 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:55.968 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:55.968 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:55.968 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:56.227 05:38:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.227 05:38:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:37:56.227 05:38:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:37:56.227 05:38:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@516 -- # nvmfcleanup 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # sync 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set +e 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # for i in {1..20} 00:37:56.487 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:37:56.487 rmmod nvme_tcp 00:37:56.487 rmmod nvme_fabrics 00:37:56.746 rmmod nvme_keyring 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@128 -- # set -e 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # return 0 00:37:56.746 05:38:10 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@517 -- # '[' -n 551443 ']' 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@518 -- # killprocess 551443 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 551443 ']' 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 551443 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 551443 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 551443' 00:37:56.746 killing process with pid 551443 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 551443 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 551443 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:37:56.746 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 
00:37:56.747 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:37:56.747 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # iptr 00:37:56.747 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-save 00:37:56.747 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:37:56.747 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@791 -- # iptables-restore 00:37:57.005 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:37:57.005 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # remove_spdk_ns 00:37:57.005 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:57.005 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:57.005 05:38:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:37:58.911 00:37:58.911 real 0m47.031s 00:37:58.911 user 2m57.259s 00:37:58.911 sys 0m19.892s 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:37:58.911 ************************************ 00:37:58.911 END TEST nvmf_ns_hotplug_stress 00:37:58.911 
************************************ 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:37:58.911 ************************************ 00:37:58.911 START TEST nvmf_delete_subsystem 00:37:58.911 ************************************ 00:37:58.911 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:37:59.172 * Looking for test storage... 
00:37:59.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lcov --version 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:37:59.172 05:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:37:59.172 05:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:37:59.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.172 --rc genhtml_branch_coverage=1 00:37:59.172 --rc genhtml_function_coverage=1 00:37:59.172 --rc genhtml_legend=1 00:37:59.172 --rc geninfo_all_blocks=1 00:37:59.172 --rc geninfo_unexecuted_blocks=1 00:37:59.172 00:37:59.172 ' 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:37:59.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.172 --rc genhtml_branch_coverage=1 00:37:59.172 --rc genhtml_function_coverage=1 00:37:59.172 --rc genhtml_legend=1 00:37:59.172 --rc geninfo_all_blocks=1 00:37:59.172 --rc geninfo_unexecuted_blocks=1 00:37:59.172 00:37:59.172 ' 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:37:59.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.172 --rc genhtml_branch_coverage=1 00:37:59.172 --rc genhtml_function_coverage=1 00:37:59.172 --rc genhtml_legend=1 00:37:59.172 --rc geninfo_all_blocks=1 00:37:59.172 --rc geninfo_unexecuted_blocks=1 00:37:59.172 00:37:59.172 ' 00:37:59.172 05:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:37:59.172 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:37:59.172 --rc genhtml_branch_coverage=1 00:37:59.172 --rc genhtml_function_coverage=1 00:37:59.172 --rc genhtml_legend=1 00:37:59.172 --rc geninfo_all_blocks=1 00:37:59.172 --rc geninfo_unexecuted_blocks=1 00:37:59.172 00:37:59.172 ' 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:37:59.172 05:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:37:59.172 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.173 
05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # : 0 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@37 -- # '[' -n '' ']' 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@55 -- # have_pci_nics=0 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@476 -- # prepare_net_devs 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@438 -- # local -g is_hw=no 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@440 -- # remove_spdk_ns 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:37:59.173 05:38:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # xtrace_disable 00:37:59.173 05:38:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # pci_devs=() 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # net_devs=() 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # e810=() 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # local -ga e810 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # x722=() 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # local -ga x722 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@322 -- # mlx=() 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@322 -- # local -ga mlx 00:38:05.741 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:05.742 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@367 -- # echo 'Found 
0000:af:00.1 (0x8086 - 0x159b)' 00:38:05.742 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.742 05:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:05.742 Found net devices under 0000:af:00.0: cvl_0_0 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:05.742 Found net devices under 0000:af:00.1: cvl_0_1 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:05.742 05:38:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@442 -- # is_hw=yes 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 
00:38:05.742 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:05.742 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.296 ms 00:38:05.742 00:38:05.742 --- 10.0.0.2 ping statistics --- 00:38:05.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.742 rtt min/avg/max/mdev = 0.296/0.296/0.296/0.000 ms 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:05.742 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:05.742 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:38:05.742 00:38:05.742 --- 10.0.0.1 ping statistics --- 00:38:05.742 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:05.742 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # return 0 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:05.742 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@509 -- # nvmfpid=561174 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@510 -- # waitforlisten 561174 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 561174 ']' 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:05.743 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 [2024-12-15 05:38:18.681919] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:05.743 [2024-12-15 05:38:18.682856] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:05.743 [2024-12-15 05:38:18.682891] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:05.743 [2024-12-15 05:38:18.757868] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:38:05.743 [2024-12-15 05:38:18.779496] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:05.743 [2024-12-15 05:38:18.779536] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:05.743 [2024-12-15 05:38:18.779543] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:05.743 [2024-12-15 05:38:18.779550] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:05.743 [2024-12-15 05:38:18.779555] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:05.743 [2024-12-15 05:38:18.780631] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:05.743 [2024-12-15 05:38:18.780634] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:05.743 [2024-12-15 05:38:18.842722] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 
00:38:05.743 [2024-12-15 05:38:18.843273] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:05.743 [2024-12-15 05:38:18.843445] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 [2024-12-15 05:38:18.921405] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 [2024-12-15 05:38:18.953692] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 NULL1 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd 
bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 Delay0 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=561349 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:38:05.743 05:38:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:05.743 [2024-12-15 05:38:19.064465] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:38:07.648 05:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:38:07.648 05:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:07.648 05:38:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:38:07.648 Write completed with error (sct=0, sc=8)
00:38:07.648 Read completed with error (sct=0, sc=8)
00:38:07.648 starting I/O failed: -6
[... repeated "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" entries omitted ...]
00:38:07.648 [2024-12-15 05:38:21.115755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1239140 is same with the state(6) to be set
[... repeated I/O error entries omitted ...]
00:38:07.649 [2024-12-15 05:38:21.116094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe43c00d060 is same with the state(6) to be set
[... repeated I/O error entries omitted ...]
00:38:08.587 [2024-12-15 05:38:22.077350] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1236260 is same with the state(6) to be set
[... repeated I/O error entries omitted ...]
00:38:08.587 [2024-12-15 05:38:22.116651] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7fe43c00d390 is same with the state(6) to be set
[... repeated I/O error entries omitted ...]
00:38:08.587 [2024-12-15 05:38:22.118199] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d920 is same with the state(6) to be set
[... repeated I/O error entries omitted ...]
00:38:08.587 [2024-12-15 05:38:22.118359] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x128d5f0 is same with the state(6) to be set
[... repeated I/O error entries omitted ...]
00:38:08.588 [2024-12-15 05:38:22.119056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1238c60 is same with the state(6) to be set
00:38:08.588 Initializing NVMe Controllers
00:38:08.588 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:08.588 Controller IO queue size 128, less than required.
00:38:08.588 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:08.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:08.588 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:08.588 Initialization complete. Launching workers.
00:38:08.588 ========================================================
00:38:08.588 Latency(us)
00:38:08.588 Device Information : IOPS MiB/s Average min max
00:38:08.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 195.47 0.10 943817.85 611.54 1011832.76
00:38:08.588 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 156.77 0.08 869738.70 229.15 1011641.89
00:38:08.588 ========================================================
00:38:08.588 Total : 352.24 0.17 910847.41 229.15 1011832.76
00:38:08.588
00:38:08.588 [2024-12-15 05:38:22.119721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1236260 (9): Bad file descriptor
00:38:08.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:38:08.588 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:08.588 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:38:08.588 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 561349
00:38:08.588 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 561349 00:38:09.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (561349) - No such process 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 561349 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 561349 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 561349 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es 
== 0 )) 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.156 [2024-12-15 05:38:22.653672] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:09.156 05:38:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=561858 00:38:09.156 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:38:09.157 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:38:09.157 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858 00:38:09.157 05:38:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:09.157 [2024-12-15 05:38:22.744998] subsystem.c:1641:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
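The repeated `(( delay++ > 20 ))` / `kill -0 561858` / `sleep 0.5` entries that follow are delete_subsystem.sh's liveness-polling loop: `kill -0` sends no signal, it only checks that the perf process still exists, and the loop gives up after a fixed number of half-second waits. A minimal standalone sketch of that idiom (function name and timings are ours, not SPDK's):

```shell
#!/usr/bin/env bash
# Hypothetical re-implementation of the polling loop traced in the log:
# probe a PID with "kill -0" (existence check only, no signal delivered)
# and stop waiting once the process is gone or the retry budget runs out.
wait_for_exit() {
    local pid=$1 delay=0
    while kill -0 "$pid" 2>/dev/null; do
        (( delay++ > 20 )) && return 1   # ~10s budget, like the log's loop
        sleep 0.5
    done
    return 0                             # process exited
}

sleep 1 &                                # stand-in for spdk_nvme_perf
wait_for_exit $! && echo "process exited within budget"
```

Once the PID disappears, `kill -0` fails with "No such process", which is exactly the message the log records at line 57 of delete_subsystem.sh.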
00:38:09.723 05:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:09.723 05:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858 00:38:09.723 05:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:10.288 05:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:10.288 05:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858 00:38:10.288 05:38:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:10.547 05:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:10.547 05:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858 00:38:10.547 05:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:11.115 05:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:11.115 05:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858 00:38:11.115 05:38:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:38:11.682 05:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:38:11.682 05:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858 00:38:11.682 05:38:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:38:12.247 05:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:38:12.247 05:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858
00:38:12.247 05:38:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:38:12.247 Initializing NVMe Controllers
00:38:12.247 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:38:12.247 Controller IO queue size 128, less than required.
00:38:12.247 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:38:12.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:38:12.247 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:38:12.247 Initialization complete. Launching workers.
00:38:12.247 ========================================================
00:38:12.247 Latency(us)
00:38:12.247 Device Information : IOPS MiB/s Average min max
00:38:12.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1003233.83 1000146.87 1042471.79
00:38:12.247 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003585.73 1000219.52 1041075.78
00:38:12.247 ========================================================
00:38:12.247 Total : 256.00 0.12 1003409.78 1000146.87 1042471.79
00:38:12.247
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 561858
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (561858) - No such process
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 561858
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@516 -- # nvmfcleanup
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # sync
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # '[' tcp == tcp ']'
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set +e
00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@125 -- # for i in {1..20} 00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:12.813 rmmod nvme_tcp 00:38:12.813 rmmod nvme_fabrics 00:38:12.813 rmmod nvme_keyring 00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@128 -- # set -e 00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # return 0 00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@517 -- # '[' -n 561174 ']' 00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@518 -- # killprocess 561174 00:38:12.813 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 561174 ']' 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 561174 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 561174 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 561174' 00:38:12.814 killing process with pid 561174 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 561174 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 561174 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # iptr 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-save 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@791 -- # iptables-restore 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:12.814 05:38:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.349 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:15.349 00:38:15.349 real 0m15.967s 00:38:15.349 user 0m25.933s 00:38:15.349 sys 0m5.967s 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:38:15.350 ************************************ 00:38:15.350 END TEST nvmf_delete_subsystem 00:38:15.350 ************************************ 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:15.350 ************************************ 00:38:15.350 START TEST nvmf_host_management 00:38:15.350 ************************************ 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:38:15.350 * Looking for test storage... 
00:38:15.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lcov --version 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:38:15.350 05:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.350 --rc genhtml_branch_coverage=1 00:38:15.350 --rc genhtml_function_coverage=1 00:38:15.350 --rc genhtml_legend=1 00:38:15.350 --rc geninfo_all_blocks=1 00:38:15.350 --rc geninfo_unexecuted_blocks=1 00:38:15.350 00:38:15.350 ' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.350 --rc genhtml_branch_coverage=1 00:38:15.350 --rc genhtml_function_coverage=1 00:38:15.350 --rc genhtml_legend=1 00:38:15.350 --rc geninfo_all_blocks=1 00:38:15.350 --rc geninfo_unexecuted_blocks=1 00:38:15.350 00:38:15.350 ' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:15.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.350 --rc genhtml_branch_coverage=1 00:38:15.350 --rc genhtml_function_coverage=1 00:38:15.350 --rc genhtml_legend=1 00:38:15.350 --rc geninfo_all_blocks=1 00:38:15.350 --rc geninfo_unexecuted_blocks=1 00:38:15.350 00:38:15.350 ' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:15.350 
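The `lt 1.15 2` walk above (split both versions on `.-:`, pad the shorter one, compare part by part) can be condensed into a small sketch. This is a simplified reconstruction of `cmp_versions` from scripts/common.sh, limited to numeric dot-separated parts.

```shell
# Minimal sketch of the version comparison the trace steps through.
cmp_versions() {
    local IFS=.-:                   # split on dot, dash, or colon
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    local op=$2
    read -ra ver2 <<< "$3"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        local d1=${ver1[v]:-0} d2=${ver2[v]:-0}   # missing parts count as 0
        (( d1 > d2 )) && { [[ $op == '>' ]]; return; }
        (( d1 < d2 )) && { [[ $op == '<' ]]; return; }
    done
    [[ $op == '>=' || $op == '<=' ]]              # all parts equal
}
lt() { cmp_versions "$1" '<' "$2"; }

lt 1.15 2 && echo "1.15 < 2"        # prints: 1.15 < 2
```

Note that the numeric comparison is what makes `lt 1.9 1.15` true, unlike a plain string comparison.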
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:15.350 --rc genhtml_branch_coverage=1 00:38:15.350 --rc genhtml_function_coverage=1 00:38:15.350 --rc genhtml_legend=1 00:38:15.350 --rc geninfo_all_blocks=1 00:38:15.350 --rc geninfo_unexecuted_blocks=1 00:38:15.350 00:38:15.350 ' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:15.350 05:38:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.350 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.351 
05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # : 0 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 
-n '' ']' 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # xtrace_disable 00:38:15.351 05:38:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # pci_devs=() 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # net_devs=() 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # e810=() 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@320 -- # local -ga e810 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # x722=() 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # local -ga x722 00:38:21.923 
05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # mlx=() 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@322 -- # local -ga mlx 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:21.923 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:21.924 05:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:21.924 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:21.924 05:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:21.924 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:21.924 05:38:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:21.924 Found net devices under 0000:af:00.0: cvl_0_0 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:21.924 Found net devices under 0000:af:00.1: cvl_0_1 00:38:21.924 05:38:34 
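The device-discovery loop above maps each PCI address to its kernel net interfaces by globbing the device's `net/` directory in sysfs and keeping only the basename. A sketch, with a hypothetical `SYSFS_ROOT` knob added here so it can run against a fake tree (the real script hardcodes `/sys`):

```shell
# Sketch of the "Found net devices under <pci>: <iface>" step in the trace.
SYSFS_ROOT=${SYSFS_ROOT:-/sys}

net_devs_for_pci() {
    local pci=$1
    local pci_net_devs=("$SYSFS_ROOT/bus/pci/devices/$pci/net/"*)
    # With nullglob unset, an unmatched glob stays literal; treat as "none".
    [[ -e ${pci_net_devs[0]} ]] || return 0
    pci_net_devs=("${pci_net_devs[@]##*/}")     # basename, as in the trace
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
}
```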
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@442 -- # is_hw=yes 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 
00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j 
ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:21.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:21.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.382 ms 00:38:21.924 00:38:21.924 --- 10.0.0.2 ping statistics --- 00:38:21.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:21.924 rtt min/avg/max/mdev = 0.382/0.382/0.382/0.000 ms 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:21.924 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:21.924 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.238 ms 00:38:21.924 00:38:21.924 --- 10.0.0.1 ping statistics --- 00:38:21.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:21.924 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@450 -- # return 0 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
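The sequence above builds the test topology: the target-side NIC (`cvl_0_0`, 10.0.0.2) moves into the `cvl_0_0_ns_spdk` namespace while the initiator side (`cvl_0_1`, 10.0.0.1) stays in the default namespace, then both pings verify connectivity across the link. A dry-run sketch of that sequence (echo only, since the real commands need root and physical NICs):

```shell
# Dry-run sketch of the netns topology the trace builds; run() only echoes.
run() { echo "+ $*"; }

setup_tcp_ns() {
    run ip netns add cvl_0_0_ns_spdk
    run ip link set cvl_0_0 netns cvl_0_0_ns_spdk
    run ip addr add 10.0.0.1/24 dev cvl_0_1
    run ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    run ip link set cvl_0_1 up
    run ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    run ip netns exec cvl_0_0_ns_spdk ip link set lo up
    # Tagging the rule with a comment lets teardown strip it later via
    # iptables-save | grep -v SPDK_NVMF | iptables-restore (see the trace).
    run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
        -m comment --comment SPDK_NVMF
}

setup_tcp_ns
```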
00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@496 -- # '[' tcp == tcp ']'
00:38:21.924 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@502 -- # modprobe nvme-tcp
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@509 -- # nvmfpid=565863
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@510 -- # waitforlisten 565863
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 565863 ']'
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:38:21.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 [2024-12-15 05:38:34.749912] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:38:21.925 [2024-12-15 05:38:34.750808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:38:21.925 [2024-12-15 05:38:34.750844] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:38:21.925 [2024-12-15 05:38:34.826070] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4
00:38:21.925 [2024-12-15 05:38:34.849657] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:38:21.925 [2024-12-15 05:38:34.849692] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:38:21.925 [2024-12-15 05:38:34.849699] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:38:21.925 [2024-12-15 05:38:34.849705] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:38:21.925 [2024-12-15 05:38:34.849710] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:38:21.925 [2024-12-15 05:38:34.851224] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2
00:38:21.925 [2024-12-15 05:38:34.851331] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3
00:38:21.925 [2024-12-15 05:38:34.851440] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:38:21.925 [2024-12-15 05:38:34.851441] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4
00:38:21.925 [2024-12-15 05:38:34.915610] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:38:21.925 [2024-12-15 05:38:34.916387] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:38:21.925 [2024-12-15 05:38:34.916785] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode.
00:38:21.925 [2024-12-15 05:38:34.917168] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:38:21.925 [2024-12-15 05:38:34.917206] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode.
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.925 05:38:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 [2024-12-15 05:38:34.996307] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 Malloc0
00:38:21.925 [2024-12-15 05:38:35.096481] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=566031
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 566031 /var/tmp/bdevperf.sock
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 566031 ']'
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:38:21.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=()
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:38:21.925 {
00:38:21.925 "params": {
00:38:21.925 "name": "Nvme$subsystem",
00:38:21.925 "trtype": "$TEST_TRANSPORT",
00:38:21.925 "traddr": "$NVMF_FIRST_TARGET_IP",
00:38:21.925 "adrfam": "ipv4",
00:38:21.925 "trsvcid": "$NVMF_PORT",
00:38:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:38:21.925 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:38:21.925 "hdgst": ${hdgst:-false},
00:38:21.925 "ddgst": ${ddgst:-false}
00:38:21.925 },
00:38:21.925 "method": "bdev_nvme_attach_controller"
00:38:21.925 }
00:38:21.925 EOF
00:38:21.925 )")
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq .
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=,
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:38:21.925 "params": {
00:38:21.925 "name": "Nvme0",
00:38:21.925 "trtype": "tcp",
00:38:21.925 "traddr": "10.0.0.2",
00:38:21.925 "adrfam": "ipv4",
00:38:21.925 "trsvcid": "4420",
00:38:21.925 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:38:21.925 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:38:21.925 "hdgst": false,
00:38:21.925 "ddgst": false
00:38:21.925 },
00:38:21.925 "method": "bdev_nvme_attach_controller"
00:38:21.925 }'
00:38:21.925 [2024-12-15 05:38:35.191602] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
00:38:21.925 [2024-12-15 05:38:35.191649] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid566031 ]
00:38:21.925 [2024-12-15 05:38:35.264557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:38:21.925 [2024-12-15 05:38:35.286840] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:38:21.925 Running I/O for 10 seconds...
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']'
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']'
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 ))
00:38:21.925 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 ))
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops'
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=131
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 131 -ge 100 ']'
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.926 [2024-12-15 05:38:35.579917] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.579953] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.579961] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.579968] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.579974] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.579981] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.579986] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.579996] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580002] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580008] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580014] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580020] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580026] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580032] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580038] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580044] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580051] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580057] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580062] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580069] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580075] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580081] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580087] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.580093] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x13cc370 is same with the state(6) to be set
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
00:38:21.926 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable
00:38:21.926 [2024-12-15 05:38:35.585639] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:38:21.926 [2024-12-15 05:38:35.585670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585679] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:38:21.926 [2024-12-15 05:38:35.585686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585694] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:38:21.926 [2024-12-15 05:38:35.585701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585708] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:38:21.926 [2024-12-15 05:38:35.585714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xcfdd40 is same with the state(6) to be set
00:38:21.926 [2024-12-15 05:38:35.585773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.926 [2024-12-15 05:38:35.585965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.926 [2024-12-15 05:38:35.585971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.585980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.585986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.585999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:38:21.927 [2024-12-15 05:38:35.586089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:38:21.927 [2024-12-15 05:38:35.586356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:38:21.927 [2024-12-15 05:38:35.586362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000
p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 
05:38:35.586449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586532] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.927 [2024-12-15 05:38:35.586567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.927 [2024-12-15 05:38:35.586575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.586717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:38:21.928 [2024-12-15 05:38:35.586724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.587646] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:38:21.928 task offset: 24576 on job bdev=Nvme0n1 fails 00:38:21.928 00:38:21.928 Latency(us) 00:38:21.928 [2024-12-15T04:38:35.615Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:21.928 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:21.928 Job: Nvme0n1 ended in about 0.11 seconds with error 00:38:21.928 Verification LBA range: start 0x0 length 0x400 00:38:21.928 Nvme0n1 : 0.11 1737.45 108.59 579.15 0.00 25508.97 1599.39 26588.89 00:38:21.928 [2024-12-15T04:38:35.615Z] =================================================================================================================== 00:38:21.928 [2024-12-15T04:38:35.615Z] Total : 1737.45 108.59 579.15 0.00 25508.97 1599.39 26588.89 00:38:21.928 [2024-12-15 05:38:35.589965] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:38:21.928 [2024-12-15 05:38:35.589984] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfdd40 (9): Bad file descriptor 00:38:21.928 [2024-12-15 05:38:35.590908] ctrlr.c: 825:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:38:21.928 [2024-12-15 05:38:35.590979] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:38:21.928 [2024-12-15 05:38:35.591006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:38:21.928 [2024-12-15 05:38:35.591022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:38:21.928 [2024-12-15 05:38:35.591033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:38:21.928 [2024-12-15 05:38:35.591039] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:38:21.928 [2024-12-15 05:38:35.591046] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xcfdd40 00:38:21.928 [2024-12-15 05:38:35.591064] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xcfdd40 (9): Bad file descriptor 00:38:21.928 [2024-12-15 05:38:35.591075] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:38:21.928 [2024-12-15 05:38:35.591082] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:38:21.928 [2024-12-15 05:38:35.591090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:38:21.928 [2024-12-15 05:38:35.591098] bdev_nvme.c:2285:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:38:21.928 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:38:21.928 05:38:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 566031 00:38:23.306 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (566031) - No such process 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # config=() 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@560 -- # local subsystem config 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:38:23.306 { 
00:38:23.306 "params": { 00:38:23.306 "name": "Nvme$subsystem", 00:38:23.306 "trtype": "$TEST_TRANSPORT", 00:38:23.306 "traddr": "$NVMF_FIRST_TARGET_IP", 00:38:23.306 "adrfam": "ipv4", 00:38:23.306 "trsvcid": "$NVMF_PORT", 00:38:23.306 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:38:23.306 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:38:23.306 "hdgst": ${hdgst:-false}, 00:38:23.306 "ddgst": ${ddgst:-false} 00:38:23.306 }, 00:38:23.306 "method": "bdev_nvme_attach_controller" 00:38:23.306 } 00:38:23.306 EOF 00:38:23.306 )") 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@582 -- # cat 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@584 -- # jq . 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@585 -- # IFS=, 00:38:23.306 05:38:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:38:23.306 "params": { 00:38:23.306 "name": "Nvme0", 00:38:23.306 "trtype": "tcp", 00:38:23.306 "traddr": "10.0.0.2", 00:38:23.306 "adrfam": "ipv4", 00:38:23.306 "trsvcid": "4420", 00:38:23.306 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:23.306 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:38:23.306 "hdgst": false, 00:38:23.306 "ddgst": false 00:38:23.306 }, 00:38:23.306 "method": "bdev_nvme_attach_controller" 00:38:23.306 }' 00:38:23.306 [2024-12-15 05:38:36.649799] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:23.306 [2024-12-15 05:38:36.649846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid566263 ] 00:38:23.306 [2024-12-15 05:38:36.724099] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:23.306 [2024-12-15 05:38:36.745080] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:23.565 Running I/O for 1 seconds... 00:38:24.501 1995.00 IOPS, 124.69 MiB/s 00:38:24.501 Latency(us) 00:38:24.501 [2024-12-15T04:38:38.188Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:24.501 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:38:24.501 Verification LBA range: start 0x0 length 0x400 00:38:24.501 Nvme0n1 : 1.02 2043.90 127.74 0.00 0.00 30708.03 3027.14 26838.55 00:38:24.501 [2024-12-15T04:38:38.188Z] =================================================================================================================== 00:38:24.501 [2024-12-15T04:38:38.188Z] Total : 2043.90 127.74 0.00 0.00 30708.03 3027.14 26838.55 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@121 -- # sync 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@124 -- # set +e 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:24.760 rmmod nvme_tcp 00:38:24.760 rmmod nvme_fabrics 00:38:24.760 rmmod nvme_keyring 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@128 -- # set -e 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # return 0 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@517 -- # '[' -n 565863 ']' 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@518 -- # killprocess 565863 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 565863 ']' 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 565863 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:38:24.760 05:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 565863 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 565863' 00:38:24.760 killing process with pid 565863 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 565863 00:38:24.760 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 565863 00:38:25.019 [2024-12-15 05:38:38.453823] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@297 -- # iptr 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-save 00:38:25.019 05:38:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@791 -- # iptables-restore 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:25.019 05:38:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:26.923 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:26.923 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:38:26.923 00:38:26.923 real 0m11.947s 00:38:26.923 user 0m16.324s 00:38:26.923 sys 0m6.015s 00:38:26.923 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:26.923 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:38:26.923 ************************************ 00:38:26.923 END TEST nvmf_host_management 00:38:26.923 ************************************ 00:38:26.923 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:26.923 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:26.923 
05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:26.923 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:27.182 ************************************ 00:38:27.182 START TEST nvmf_lvol 00:38:27.182 ************************************ 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:38:27.182 * Looking for test storage... 00:38:27.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lcov --version 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:38:27.182 05:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.182 --rc genhtml_branch_coverage=1 00:38:27.182 --rc 
genhtml_function_coverage=1 00:38:27.182 --rc genhtml_legend=1 00:38:27.182 --rc geninfo_all_blocks=1 00:38:27.182 --rc geninfo_unexecuted_blocks=1 00:38:27.182 00:38:27.182 ' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.182 --rc genhtml_branch_coverage=1 00:38:27.182 --rc genhtml_function_coverage=1 00:38:27.182 --rc genhtml_legend=1 00:38:27.182 --rc geninfo_all_blocks=1 00:38:27.182 --rc geninfo_unexecuted_blocks=1 00:38:27.182 00:38:27.182 ' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.182 --rc genhtml_branch_coverage=1 00:38:27.182 --rc genhtml_function_coverage=1 00:38:27.182 --rc genhtml_legend=1 00:38:27.182 --rc geninfo_all_blocks=1 00:38:27.182 --rc geninfo_unexecuted_blocks=1 00:38:27.182 00:38:27.182 ' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:27.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:27.182 --rc genhtml_branch_coverage=1 00:38:27.182 --rc genhtml_function_coverage=1 00:38:27.182 --rc genhtml_legend=1 00:38:27.182 --rc geninfo_all_blocks=1 00:38:27.182 --rc geninfo_unexecuted_blocks=1 00:38:27.182 00:38:27.182 ' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@49 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:27.182 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.183 05:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # : 0 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:27.183 05:38:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@476 -- # 
prepare_net_devs 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # xtrace_disable 00:38:27.183 05:38:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # pci_devs=() 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@317 -- # local -A pci_drivers 
00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # net_devs=() 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # e810=() 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@320 -- # local -ga e810 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # x722=() 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # local -ga x722 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # mlx=() 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@322 -- # local -ga mlx 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@338 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:33.751 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:33.751 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:33.751 05:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:33.751 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:33.752 Found net devices under 0000:af:00.0: cvl_0_0 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:33.752 Found net devices under 0000:af:00.1: cvl_0_1 00:38:33.752 05:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@442 -- # is_hw=yes 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:33.752 05:38:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:33.752 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:38:33.752 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.419 ms 00:38:33.752 00:38:33.752 --- 10.0.0.2 ping statistics --- 00:38:33.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.752 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:33.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:38:33.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.204 ms 00:38:33.752 00:38:33.752 --- 10.0.0.1 ping statistics --- 00:38:33.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:33.752 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@450 -- # return 0 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:38:33.752 
05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@509 -- # nvmfpid=569952 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@510 -- # waitforlisten 569952 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 569952 ']' 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:33.752 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:33.752 [2024-12-15 05:38:46.745196] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 
00:38:33.752 [2024-12-15 05:38:46.746167] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:33.752 [2024-12-15 05:38:46.746205] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:33.752 [2024-12-15 05:38:46.824699] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:38:33.752 [2024-12-15 05:38:46.847669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:33.752 [2024-12-15 05:38:46.847706] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:33.752 [2024-12-15 05:38:46.847712] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:33.752 [2024-12-15 05:38:46.847718] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:33.752 [2024-12-15 05:38:46.847723] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:33.752 [2024-12-15 05:38:46.848866] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:33.752 [2024-12-15 05:38:46.848978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:33.752 [2024-12-15 05:38:46.848978] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:38:33.752 [2024-12-15 05:38:46.912434] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:33.752 [2024-12-15 05:38:46.913277] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:38:33.752 [2024-12-15 05:38:46.913529] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:33.752 [2024-12-15 05:38:46.913692] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:33.752 05:38:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:33.752 [2024-12-15 05:38:47.145652] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:33.752 05:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:33.752 05:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:38:33.752 05:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:38:34.011 05:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:38:34.011 05:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:38:34.271 05:38:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:38:34.530 05:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=a3622910-cbe4-45fc-b6b2-34d0a1750618 00:38:34.530 05:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u a3622910-cbe4-45fc-b6b2-34d0a1750618 lvol 20 00:38:34.788 05:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=19c919a2-abb5-4b2a-afbb-39850db50bdd 00:38:34.788 05:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:34.788 05:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 19c919a2-abb5-4b2a-afbb-39850db50bdd 00:38:35.047 05:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:35.305 [2024-12-15 05:38:48.773518] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:35.305 05:38:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:35.564 
05:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=570238 00:38:35.564 05:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:38:35.564 05:38:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:38:36.499 05:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 19c919a2-abb5-4b2a-afbb-39850db50bdd MY_SNAPSHOT 00:38:36.758 05:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4ec63964-e42c-4e8c-973f-3aa62fbd7db1 00:38:36.758 05:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 19c919a2-abb5-4b2a-afbb-39850db50bdd 30 00:38:37.016 05:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4ec63964-e42c-4e8c-973f-3aa62fbd7db1 MY_CLONE 00:38:37.274 05:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=4b87f14e-e71f-4db5-832e-6754d5a3de92 00:38:37.274 05:38:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 4b87f14e-e71f-4db5-832e-6754d5a3de92 00:38:37.532 05:38:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 570238 00:38:47.509 Initializing NVMe Controllers 00:38:47.509 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:38:47.509 
Controller IO queue size 128, less than required. 00:38:47.509 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:38:47.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:38:47.509 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:38:47.509 Initialization complete. Launching workers. 00:38:47.509 ======================================================== 00:38:47.509 Latency(us) 00:38:47.509 Device Information : IOPS MiB/s Average min max 00:38:47.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12071.79 47.16 10604.68 1507.12 58200.33 00:38:47.509 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12266.39 47.92 10436.94 2165.73 50661.90 00:38:47.509 ======================================================== 00:38:47.509 Total : 24338.18 95.07 10520.14 1507.12 58200.33 00:38:47.509 00:38:47.509 05:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:38:47.509 05:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 19c919a2-abb5-4b2a-afbb-39850db50bdd 00:38:47.509 05:38:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u a3622910-cbe4-45fc-b6b2-34d0a1750618 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@516 -- # nvmfcleanup 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@121 -- # sync 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@124 -- # set +e 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # for i in {1..20} 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:38:47.509 rmmod nvme_tcp 00:38:47.509 rmmod nvme_fabrics 00:38:47.509 rmmod nvme_keyring 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@128 -- # set -e 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # return 0 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@517 -- # '[' -n 569952 ']' 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@518 -- # killprocess 569952 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 569952 ']' 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 569952 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 569952 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 569952' 00:38:47.509 killing process with pid 569952 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 569952 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 569952 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@297 -- # iptr 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-save 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@791 -- # iptables-restore 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@302 -- # remove_spdk_ns 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:47.509 05:39:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:47.509 05:39:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:38:48.887 00:38:48.887 real 0m21.867s 00:38:48.887 user 0m56.008s 00:38:48.887 sys 0m9.772s 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:38:48.887 ************************************ 00:38:48.887 END TEST nvmf_lvol 00:38:48.887 ************************************ 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:38:48.887 ************************************ 00:38:48.887 START TEST nvmf_lvs_grow 00:38:48.887 ************************************ 00:38:48.887 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:38:49.147 * Looking for test storage... 
00:38:49.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lcov --version 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:38:49.147 05:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:38:49.147 05:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:38:49.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.147 --rc genhtml_branch_coverage=1 00:38:49.147 --rc genhtml_function_coverage=1 00:38:49.147 --rc genhtml_legend=1 00:38:49.147 --rc geninfo_all_blocks=1 00:38:49.147 --rc geninfo_unexecuted_blocks=1 00:38:49.147 00:38:49.147 ' 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:38:49.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.147 --rc genhtml_branch_coverage=1 00:38:49.147 --rc genhtml_function_coverage=1 00:38:49.147 --rc genhtml_legend=1 00:38:49.147 --rc geninfo_all_blocks=1 00:38:49.147 --rc geninfo_unexecuted_blocks=1 00:38:49.147 00:38:49.147 ' 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:38:49.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.147 --rc genhtml_branch_coverage=1 00:38:49.147 --rc genhtml_function_coverage=1 00:38:49.147 --rc genhtml_legend=1 00:38:49.147 --rc geninfo_all_blocks=1 00:38:49.147 --rc geninfo_unexecuted_blocks=1 00:38:49.147 00:38:49.147 ' 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:38:49.147 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:38:49.147 --rc genhtml_branch_coverage=1 00:38:49.147 --rc genhtml_function_coverage=1 00:38:49.147 --rc genhtml_legend=1 00:38:49.147 --rc geninfo_all_blocks=1 00:38:49.147 --rc 
geninfo_unexecuted_blocks=1 00:38:49.147 00:38:49.147 ' 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:38:49.147 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:38:49.148 05:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.148 05:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # : 0 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@55 -- # have_pci_nics=0 00:38:49.148 05:39:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@476 -- # prepare_net_devs 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@438 -- # local -g is_hw=no 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@440 -- # remove_spdk_ns 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # xtrace_disable 00:38:49.148 05:39:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:55.718 
05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # pci_devs=() 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # local -a pci_devs 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # pci_net_devs=() 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # pci_drivers=() 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@317 -- # local -A pci_drivers 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # net_devs=() 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@319 -- # local -ga net_devs 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # e810=() 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@320 -- # local -ga e810 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # x722=() 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # local -ga x722 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # mlx=() 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@322 -- # local -ga mlx 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:38:55.718 05:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:38:55.718 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:38:55.718 05:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:38:55.719 Found 0000:af:00.0 (0x8086 - 0x159b) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:38:55.719 Found 0000:af:00.1 (0x8086 - 0x159b) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@376 -- # 
[[ 0x159b == \0\x\1\0\1\7 ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:38:55.719 Found net devices under 0000:af:00.0: cvl_0_0 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.719 05:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@418 -- # [[ up == up ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:38:55.719 Found net devices under 0000:af:00.1: cvl_0_1 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@442 -- # is_hw=yes 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:38:55.719 
05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@277 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:38:55.719 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:38:55.719 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.409 ms 00:38:55.719 00:38:55.719 --- 10.0.0.2 ping statistics --- 00:38:55.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.719 rtt min/avg/max/mdev = 0.409/0.409/0.409/0.000 ms 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:38:55.719 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:38:55.719 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:38:55.719 00:38:55.719 --- 10.0.0.1 ping statistics --- 00:38:55.719 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:38:55.719 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@450 -- # return 0 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:55.719 05:39:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@509 -- # nvmfpid=575977 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@510 -- # waitforlisten 575977 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 575977 ']' 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:38:55.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:55.719 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:55.719 [2024-12-15 05:39:08.722938] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:38:55.719 [2024-12-15 05:39:08.723918] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:38:55.720 [2024-12-15 05:39:08.723957] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:38:55.720 [2024-12-15 05:39:08.802493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:55.720 [2024-12-15 05:39:08.824464] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:38:55.720 [2024-12-15 05:39:08.824502] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:38:55.720 [2024-12-15 05:39:08.824509] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:38:55.720 [2024-12-15 05:39:08.824517] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:38:55.720 [2024-12-15 05:39:08.824526] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:38:55.720 [2024-12-15 05:39:08.824987] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:38:55.720 [2024-12-15 05:39:08.888027] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:38:55.720 [2024-12-15 05:39:08.888222] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:38:55.720 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:55.720 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:38:55.720 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:38:55.720 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:38:55.720 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:55.720 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:38:55.720 05:39:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:38:55.720 [2024-12-15 05:39:09.117624] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:38:55.720 ************************************ 00:38:55.720 START TEST lvs_grow_clean 00:38:55.720 ************************************ 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:38:55.720 05:39:09 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:55.720 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:38:55.982 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:38:55.982 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:38:55.982 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:38:55.982 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:38:55.982 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:38:56.285 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:38:56.285 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:38:56.285 05:39:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 lvol 150 00:38:56.589 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f14c392a-e1cf-4a56-bb3c-b96bfe697e26 00:38:56.589 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:38:56.589 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:38:56.589 [2024-12-15 05:39:10.197375] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:38:56.589 [2024-12-15 05:39:10.197503] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:38:56.589 true 00:38:56.589 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:38:56.589 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:38:56.891 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:38:56.891 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:38:57.221 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f14c392a-e1cf-4a56-bb3c-b96bfe697e26 00:38:57.221 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:38:57.479 [2024-12-15 05:39:10.969837] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:38:57.480 05:39:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=576468 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 576468 /var/tmp/bdevperf.sock 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 576468 ']' 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:38:57.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:38:57.738 [2024-12-15 05:39:11.206705] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:38:57.738 [2024-12-15 05:39:11.206750] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid576468 ] 00:38:57.738 [2024-12-15 05:39:11.281465] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:38:57.738 [2024-12-15 05:39:11.303962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:38:57.738 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:38:57.997 Nvme0n1 00:38:57.997 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:38:58.256 [ 00:38:58.256 { 00:38:58.256 "name": "Nvme0n1", 00:38:58.256 "aliases": [ 00:38:58.256 "f14c392a-e1cf-4a56-bb3c-b96bfe697e26" 00:38:58.256 ], 00:38:58.256 "product_name": "NVMe disk", 00:38:58.256 
"block_size": 4096, 00:38:58.256 "num_blocks": 38912, 00:38:58.256 "uuid": "f14c392a-e1cf-4a56-bb3c-b96bfe697e26", 00:38:58.256 "numa_id": 1, 00:38:58.256 "assigned_rate_limits": { 00:38:58.256 "rw_ios_per_sec": 0, 00:38:58.256 "rw_mbytes_per_sec": 0, 00:38:58.256 "r_mbytes_per_sec": 0, 00:38:58.256 "w_mbytes_per_sec": 0 00:38:58.256 }, 00:38:58.256 "claimed": false, 00:38:58.256 "zoned": false, 00:38:58.256 "supported_io_types": { 00:38:58.256 "read": true, 00:38:58.256 "write": true, 00:38:58.256 "unmap": true, 00:38:58.256 "flush": true, 00:38:58.256 "reset": true, 00:38:58.256 "nvme_admin": true, 00:38:58.256 "nvme_io": true, 00:38:58.256 "nvme_io_md": false, 00:38:58.256 "write_zeroes": true, 00:38:58.256 "zcopy": false, 00:38:58.256 "get_zone_info": false, 00:38:58.256 "zone_management": false, 00:38:58.256 "zone_append": false, 00:38:58.256 "compare": true, 00:38:58.256 "compare_and_write": true, 00:38:58.256 "abort": true, 00:38:58.256 "seek_hole": false, 00:38:58.256 "seek_data": false, 00:38:58.256 "copy": true, 00:38:58.256 "nvme_iov_md": false 00:38:58.256 }, 00:38:58.256 "memory_domains": [ 00:38:58.256 { 00:38:58.256 "dma_device_id": "system", 00:38:58.256 "dma_device_type": 1 00:38:58.256 } 00:38:58.256 ], 00:38:58.256 "driver_specific": { 00:38:58.256 "nvme": [ 00:38:58.256 { 00:38:58.256 "trid": { 00:38:58.256 "trtype": "TCP", 00:38:58.256 "adrfam": "IPv4", 00:38:58.256 "traddr": "10.0.0.2", 00:38:58.256 "trsvcid": "4420", 00:38:58.256 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:38:58.256 }, 00:38:58.256 "ctrlr_data": { 00:38:58.256 "cntlid": 1, 00:38:58.256 "vendor_id": "0x8086", 00:38:58.256 "model_number": "SPDK bdev Controller", 00:38:58.256 "serial_number": "SPDK0", 00:38:58.257 "firmware_revision": "25.01", 00:38:58.257 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:38:58.257 "oacs": { 00:38:58.257 "security": 0, 00:38:58.257 "format": 0, 00:38:58.257 "firmware": 0, 00:38:58.257 "ns_manage": 0 00:38:58.257 }, 00:38:58.257 "multi_ctrlr": true, 
00:38:58.257 "ana_reporting": false 00:38:58.257 }, 00:38:58.257 "vs": { 00:38:58.257 "nvme_version": "1.3" 00:38:58.257 }, 00:38:58.257 "ns_data": { 00:38:58.257 "id": 1, 00:38:58.257 "can_share": true 00:38:58.257 } 00:38:58.257 } 00:38:58.257 ], 00:38:58.257 "mp_policy": "active_passive" 00:38:58.257 } 00:38:58.257 } 00:38:58.257 ] 00:38:58.257 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=576488 00:38:58.257 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:38:58.257 05:39:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:38:58.515 Running I/O for 10 seconds... 00:38:59.452 Latency(us) 00:38:59.452 [2024-12-15T04:39:13.139Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:38:59.452 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:38:59.452 Nvme0n1 : 1.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:38:59.452 [2024-12-15T04:39:13.139Z] =================================================================================================================== 00:38:59.452 [2024-12-15T04:39:13.139Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:38:59.452 00:39:00.389 05:39:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:00.389 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:00.389 Nvme0n1 : 2.00 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:39:00.389 [2024-12-15T04:39:14.076Z] 
=================================================================================================================== 00:39:00.389 [2024-12-15T04:39:14.076Z] Total : 22987.00 89.79 0.00 0.00 0.00 0.00 0.00 00:39:00.389 00:39:00.389 true 00:39:00.648 05:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:00.648 05:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:00.648 05:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:00.648 05:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:00.648 05:39:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 576488 00:39:01.594 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:01.594 Nvme0n1 : 3.00 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:39:01.594 [2024-12-15T04:39:15.281Z] =================================================================================================================== 00:39:01.594 [2024-12-15T04:39:15.281Z] Total : 23156.33 90.45 0.00 0.00 0.00 0.00 0.00 00:39:01.594 00:39:02.534 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:02.534 Nvme0n1 : 4.00 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:39:02.534 [2024-12-15T04:39:16.221Z] =================================================================================================================== 00:39:02.534 [2024-12-15T04:39:16.221Z] Total : 23272.75 90.91 0.00 0.00 0.00 0.00 0.00 00:39:02.534 00:39:03.470 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:39:03.470 Nvme0n1 : 5.00 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:39:03.470 [2024-12-15T04:39:17.157Z] =================================================================================================================== 00:39:03.470 [2024-12-15T04:39:17.157Z] Total : 23368.00 91.28 0.00 0.00 0.00 0.00 0.00 00:39:03.470 00:39:04.407 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:04.407 Nvme0n1 : 6.00 23413.17 91.46 0.00 0.00 0.00 0.00 0.00 00:39:04.407 [2024-12-15T04:39:18.094Z] =================================================================================================================== 00:39:04.407 [2024-12-15T04:39:18.094Z] Total : 23413.17 91.46 0.00 0.00 0.00 0.00 0.00 00:39:04.407 00:39:05.343 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:05.343 Nvme0n1 : 7.00 23461.14 91.65 0.00 0.00 0.00 0.00 0.00 00:39:05.343 [2024-12-15T04:39:19.030Z] =================================================================================================================== 00:39:05.343 [2024-12-15T04:39:19.030Z] Total : 23461.14 91.65 0.00 0.00 0.00 0.00 0.00 00:39:05.343 00:39:06.721 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:06.721 Nvme0n1 : 8.00 23497.12 91.79 0.00 0.00 0.00 0.00 0.00 00:39:06.721 [2024-12-15T04:39:20.408Z] =================================================================================================================== 00:39:06.721 [2024-12-15T04:39:20.408Z] Total : 23497.12 91.79 0.00 0.00 0.00 0.00 0.00 00:39:06.721 00:39:07.659 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:07.659 Nvme0n1 : 9.00 23482.78 91.73 0.00 0.00 0.00 0.00 0.00 00:39:07.659 [2024-12-15T04:39:21.346Z] =================================================================================================================== 00:39:07.659 [2024-12-15T04:39:21.346Z] Total : 23482.78 91.73 0.00 0.00 0.00 0.00 0.00 00:39:07.659 
00:39:08.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:08.596 Nvme0n1 : 10.00 23509.40 91.83 0.00 0.00 0.00 0.00 0.00 00:39:08.596 [2024-12-15T04:39:22.283Z] =================================================================================================================== 00:39:08.596 [2024-12-15T04:39:22.283Z] Total : 23509.40 91.83 0.00 0.00 0.00 0.00 0.00 00:39:08.596 00:39:08.596 00:39:08.596 Latency(us) 00:39:08.596 [2024-12-15T04:39:22.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.596 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:08.596 Nvme0n1 : 10.00 23514.37 91.85 0.00 0.00 5440.72 3245.59 26339.23 00:39:08.596 [2024-12-15T04:39:22.283Z] =================================================================================================================== 00:39:08.596 [2024-12-15T04:39:22.283Z] Total : 23514.37 91.85 0.00 0.00 5440.72 3245.59 26339.23 00:39:08.596 { 00:39:08.596 "results": [ 00:39:08.596 { 00:39:08.596 "job": "Nvme0n1", 00:39:08.596 "core_mask": "0x2", 00:39:08.596 "workload": "randwrite", 00:39:08.596 "status": "finished", 00:39:08.596 "queue_depth": 128, 00:39:08.596 "io_size": 4096, 00:39:08.596 "runtime": 10.003331, 00:39:08.596 "iops": 23514.367364230973, 00:39:08.596 "mibps": 91.85299751652724, 00:39:08.596 "io_failed": 0, 00:39:08.596 "io_timeout": 0, 00:39:08.596 "avg_latency_us": 5440.720983678641, 00:39:08.596 "min_latency_us": 3245.592380952381, 00:39:08.596 "max_latency_us": 26339.230476190478 00:39:08.596 } 00:39:08.596 ], 00:39:08.596 "core_count": 1 00:39:08.596 } 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 576468 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 576468 ']' 00:39:08.596 05:39:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 576468 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 576468 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 576468' 00:39:08.596 killing process with pid 576468 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 576468 00:39:08.596 Received shutdown signal, test time was about 10.000000 seconds 00:39:08.596 00:39:08.596 Latency(us) 00:39:08.596 [2024-12-15T04:39:22.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:08.596 [2024-12-15T04:39:22.283Z] =================================================================================================================== 00:39:08.596 [2024-12-15T04:39:22.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 576468 00:39:08.596 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:08.855 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:09.114 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:09.114 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:09.114 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:09.114 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:39:09.114 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:09.373 [2024-12-15 05:39:22.957441] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:09.373 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:09.373 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:39:09.373 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:09.373 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:09.374 05:39:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:09.633 request: 00:39:09.633 { 00:39:09.633 "uuid": "bd29ec7c-0fa5-4aa1-b8de-cf465d962889", 00:39:09.633 "method": 
"bdev_lvol_get_lvstores", 00:39:09.633 "req_id": 1 00:39:09.633 } 00:39:09.633 Got JSON-RPC error response 00:39:09.633 response: 00:39:09.633 { 00:39:09.633 "code": -19, 00:39:09.633 "message": "No such device" 00:39:09.633 } 00:39:09.633 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:39:09.633 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:09.633 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:09.633 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:09.633 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:09.892 aio_bdev 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev f14c392a-e1cf-4a56-bb3c-b96bfe697e26 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f14c392a-e1cf-4a56-bb3c-b96bfe697e26 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:09.892 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f14c392a-e1cf-4a56-bb3c-b96bfe697e26 -t 2000 00:39:10.151 [ 00:39:10.151 { 00:39:10.151 "name": "f14c392a-e1cf-4a56-bb3c-b96bfe697e26", 00:39:10.151 "aliases": [ 00:39:10.151 "lvs/lvol" 00:39:10.151 ], 00:39:10.151 "product_name": "Logical Volume", 00:39:10.151 "block_size": 4096, 00:39:10.151 "num_blocks": 38912, 00:39:10.151 "uuid": "f14c392a-e1cf-4a56-bb3c-b96bfe697e26", 00:39:10.151 "assigned_rate_limits": { 00:39:10.151 "rw_ios_per_sec": 0, 00:39:10.151 "rw_mbytes_per_sec": 0, 00:39:10.151 "r_mbytes_per_sec": 0, 00:39:10.151 "w_mbytes_per_sec": 0 00:39:10.151 }, 00:39:10.151 "claimed": false, 00:39:10.151 "zoned": false, 00:39:10.151 "supported_io_types": { 00:39:10.151 "read": true, 00:39:10.151 "write": true, 00:39:10.151 "unmap": true, 00:39:10.151 "flush": false, 00:39:10.151 "reset": true, 00:39:10.151 "nvme_admin": false, 00:39:10.151 "nvme_io": false, 00:39:10.151 "nvme_io_md": false, 00:39:10.151 "write_zeroes": true, 00:39:10.151 "zcopy": false, 00:39:10.151 "get_zone_info": false, 00:39:10.151 "zone_management": false, 00:39:10.151 "zone_append": false, 00:39:10.151 "compare": false, 00:39:10.151 "compare_and_write": false, 00:39:10.151 "abort": false, 00:39:10.151 "seek_hole": true, 00:39:10.151 "seek_data": true, 00:39:10.151 "copy": false, 00:39:10.151 "nvme_iov_md": false 00:39:10.151 }, 00:39:10.151 "driver_specific": { 00:39:10.151 "lvol": { 00:39:10.151 "lvol_store_uuid": "bd29ec7c-0fa5-4aa1-b8de-cf465d962889", 00:39:10.151 "base_bdev": "aio_bdev", 00:39:10.151 
"thin_provision": false, 00:39:10.151 "num_allocated_clusters": 38, 00:39:10.151 "snapshot": false, 00:39:10.151 "clone": false, 00:39:10.151 "esnap_clone": false 00:39:10.151 } 00:39:10.151 } 00:39:10.151 } 00:39:10.151 ] 00:39:10.151 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:39:10.151 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:10.151 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:10.410 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:10.410 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:10.410 05:39:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 00:39:10.669 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:10.669 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f14c392a-e1cf-4a56-bb3c-b96bfe697e26 00:39:10.669 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u bd29ec7c-0fa5-4aa1-b8de-cf465d962889 
00:39:10.928 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:11.186 00:39:11.186 real 0m15.531s 00:39:11.186 user 0m15.148s 00:39:11.186 sys 0m1.426s 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:39:11.186 ************************************ 00:39:11.186 END TEST lvs_grow_clean 00:39:11.186 ************************************ 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:11.186 ************************************ 00:39:11.186 START TEST lvs_grow_dirty 00:39:11.186 ************************************ 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:39:11.186 05:39:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:39:11.186 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:11.187 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:11.187 05:39:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:11.445 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:39:11.445 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:39:11.704 05:39:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:11.704 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:11.704 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:39:11.963 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:39:11.963 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:39:11.963 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 lvol 150 00:39:11.963 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=52d2356c-0694-4156-ab48-41e0ddaaabe5 00:39:11.963 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:11.963 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:39:12.222 [2024-12-15 05:39:25.761375] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:39:12.222 [2024-12-15 
05:39:25.761501] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:39:12.222 true 00:39:12.222 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:12.222 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:39:12.481 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:39:12.481 05:39:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:39:12.740 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 52d2356c-0694-4156-ab48-41e0ddaaabe5 00:39:12.740 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:39:12.999 [2024-12-15 05:39:26.549792] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:12.999 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=578979 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 578979 /var/tmp/bdevperf.sock 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 578979 ']' 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:13.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:13.258 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:13.258 [2024-12-15 05:39:26.801549] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:13.258 [2024-12-15 05:39:26.801595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid578979 ] 00:39:13.258 [2024-12-15 05:39:26.872816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:13.258 [2024-12-15 05:39:26.894700] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:13.517 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:13.517 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:13.517 05:39:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:39:13.775 Nvme0n1 00:39:13.775 05:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:39:14.034 [ 00:39:14.034 { 00:39:14.034 "name": "Nvme0n1", 00:39:14.034 "aliases": [ 00:39:14.034 "52d2356c-0694-4156-ab48-41e0ddaaabe5" 00:39:14.034 ], 00:39:14.034 "product_name": "NVMe disk", 00:39:14.034 "block_size": 4096, 00:39:14.034 "num_blocks": 38912, 00:39:14.034 "uuid": "52d2356c-0694-4156-ab48-41e0ddaaabe5", 00:39:14.034 "numa_id": 1, 00:39:14.034 "assigned_rate_limits": { 00:39:14.034 "rw_ios_per_sec": 0, 00:39:14.034 "rw_mbytes_per_sec": 0, 00:39:14.034 "r_mbytes_per_sec": 0, 00:39:14.034 "w_mbytes_per_sec": 0 00:39:14.034 }, 00:39:14.034 "claimed": false, 00:39:14.034 "zoned": false, 
00:39:14.034 "supported_io_types": { 00:39:14.034 "read": true, 00:39:14.034 "write": true, 00:39:14.034 "unmap": true, 00:39:14.034 "flush": true, 00:39:14.034 "reset": true, 00:39:14.034 "nvme_admin": true, 00:39:14.034 "nvme_io": true, 00:39:14.034 "nvme_io_md": false, 00:39:14.034 "write_zeroes": true, 00:39:14.034 "zcopy": false, 00:39:14.034 "get_zone_info": false, 00:39:14.034 "zone_management": false, 00:39:14.034 "zone_append": false, 00:39:14.034 "compare": true, 00:39:14.034 "compare_and_write": true, 00:39:14.034 "abort": true, 00:39:14.034 "seek_hole": false, 00:39:14.034 "seek_data": false, 00:39:14.034 "copy": true, 00:39:14.034 "nvme_iov_md": false 00:39:14.034 }, 00:39:14.034 "memory_domains": [ 00:39:14.034 { 00:39:14.034 "dma_device_id": "system", 00:39:14.034 "dma_device_type": 1 00:39:14.034 } 00:39:14.034 ], 00:39:14.035 "driver_specific": { 00:39:14.035 "nvme": [ 00:39:14.035 { 00:39:14.035 "trid": { 00:39:14.035 "trtype": "TCP", 00:39:14.035 "adrfam": "IPv4", 00:39:14.035 "traddr": "10.0.0.2", 00:39:14.035 "trsvcid": "4420", 00:39:14.035 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:39:14.035 }, 00:39:14.035 "ctrlr_data": { 00:39:14.035 "cntlid": 1, 00:39:14.035 "vendor_id": "0x8086", 00:39:14.035 "model_number": "SPDK bdev Controller", 00:39:14.035 "serial_number": "SPDK0", 00:39:14.035 "firmware_revision": "25.01", 00:39:14.035 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:39:14.035 "oacs": { 00:39:14.035 "security": 0, 00:39:14.035 "format": 0, 00:39:14.035 "firmware": 0, 00:39:14.035 "ns_manage": 0 00:39:14.035 }, 00:39:14.035 "multi_ctrlr": true, 00:39:14.035 "ana_reporting": false 00:39:14.035 }, 00:39:14.035 "vs": { 00:39:14.035 "nvme_version": "1.3" 00:39:14.035 }, 00:39:14.035 "ns_data": { 00:39:14.035 "id": 1, 00:39:14.035 "can_share": true 00:39:14.035 } 00:39:14.035 } 00:39:14.035 ], 00:39:14.035 "mp_policy": "active_passive" 00:39:14.035 } 00:39:14.035 } 00:39:14.035 ] 00:39:14.035 05:39:27 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=578995 00:39:14.035 05:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:14.035 05:39:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:39:14.035 Running I/O for 10 seconds... 00:39:14.971 Latency(us) 00:39:14.971 [2024-12-15T04:39:28.658Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:14.972 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:14.972 Nvme0n1 : 1.00 22797.00 89.05 0.00 0.00 0.00 0.00 0.00 00:39:14.972 [2024-12-15T04:39:28.659Z] =================================================================================================================== 00:39:14.972 [2024-12-15T04:39:28.659Z] Total : 22797.00 89.05 0.00 0.00 0.00 0.00 0.00 00:39:14.972 00:39:15.908 05:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:16.167 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:16.167 Nvme0n1 : 2.00 23138.50 90.38 0.00 0.00 0.00 0.00 0.00 00:39:16.167 [2024-12-15T04:39:29.854Z] =================================================================================================================== 00:39:16.167 [2024-12-15T04:39:29.854Z] Total : 23138.50 90.38 0.00 0.00 0.00 0.00 0.00 00:39:16.167 00:39:16.167 true 00:39:16.167 05:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:16.167 05:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:39:16.425 05:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:39:16.425 05:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:39:16.425 05:39:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 578995 00:39:16.993 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:16.993 Nvme0n1 : 3.00 23088.00 90.19 0.00 0.00 0.00 0.00 0.00 00:39:16.993 [2024-12-15T04:39:30.680Z] =================================================================================================================== 00:39:16.993 [2024-12-15T04:39:30.680Z] Total : 23088.00 90.19 0.00 0.00 0.00 0.00 0.00 00:39:16.993 00:39:18.371 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:18.371 Nvme0n1 : 4.00 23221.50 90.71 0.00 0.00 0.00 0.00 0.00 00:39:18.371 [2024-12-15T04:39:32.058Z] =================================================================================================================== 00:39:18.371 [2024-12-15T04:39:32.058Z] Total : 23221.50 90.71 0.00 0.00 0.00 0.00 0.00 00:39:18.371 00:39:19.306 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:19.306 Nvme0n1 : 5.00 23301.60 91.02 0.00 0.00 0.00 0.00 0.00 00:39:19.306 [2024-12-15T04:39:32.993Z] =================================================================================================================== 00:39:19.306 [2024-12-15T04:39:32.993Z] Total : 23301.60 91.02 0.00 0.00 0.00 0.00 0.00 00:39:19.306 00:39:20.243 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:39:20.243 Nvme0n1 : 6.00 23376.17 91.31 0.00 0.00 0.00 0.00 0.00 00:39:20.243 [2024-12-15T04:39:33.930Z] =================================================================================================================== 00:39:20.243 [2024-12-15T04:39:33.930Z] Total : 23376.17 91.31 0.00 0.00 0.00 0.00 0.00 00:39:20.243 00:39:21.178 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:21.178 Nvme0n1 : 7.00 23429.43 91.52 0.00 0.00 0.00 0.00 0.00 00:39:21.178 [2024-12-15T04:39:34.865Z] =================================================================================================================== 00:39:21.178 [2024-12-15T04:39:34.865Z] Total : 23429.43 91.52 0.00 0.00 0.00 0.00 0.00 00:39:21.178 00:39:22.113 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:22.113 Nvme0n1 : 8.00 23469.38 91.68 0.00 0.00 0.00 0.00 0.00 00:39:22.113 [2024-12-15T04:39:35.800Z] =================================================================================================================== 00:39:22.113 [2024-12-15T04:39:35.800Z] Total : 23469.38 91.68 0.00 0.00 0.00 0.00 0.00 00:39:22.113 00:39:23.052 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:23.052 Nvme0n1 : 9.00 23500.44 91.80 0.00 0.00 0.00 0.00 0.00 00:39:23.052 [2024-12-15T04:39:36.739Z] =================================================================================================================== 00:39:23.052 [2024-12-15T04:39:36.739Z] Total : 23500.44 91.80 0.00 0.00 0.00 0.00 0.00 00:39:23.052 00:39:24.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:24.427 Nvme0n1 : 10.00 23525.30 91.90 0.00 0.00 0.00 0.00 0.00 00:39:24.427 [2024-12-15T04:39:38.114Z] =================================================================================================================== 00:39:24.427 [2024-12-15T04:39:38.114Z] Total : 23525.30 91.90 0.00 0.00 0.00 0.00 0.00 00:39:24.427 00:39:24.427 
00:39:24.427 Latency(us) 00:39:24.427 [2024-12-15T04:39:38.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.427 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:39:24.427 Nvme0n1 : 10.01 23524.69 91.89 0.00 0.00 5438.15 3151.97 27962.03 00:39:24.427 [2024-12-15T04:39:38.114Z] =================================================================================================================== 00:39:24.427 [2024-12-15T04:39:38.114Z] Total : 23524.69 91.89 0.00 0.00 5438.15 3151.97 27962.03 00:39:24.427 { 00:39:24.427 "results": [ 00:39:24.427 { 00:39:24.427 "job": "Nvme0n1", 00:39:24.427 "core_mask": "0x2", 00:39:24.427 "workload": "randwrite", 00:39:24.427 "status": "finished", 00:39:24.427 "queue_depth": 128, 00:39:24.427 "io_size": 4096, 00:39:24.427 "runtime": 10.005701, 00:39:24.427 "iops": 23524.68857504337, 00:39:24.427 "mibps": 91.89331474626316, 00:39:24.427 "io_failed": 0, 00:39:24.427 "io_timeout": 0, 00:39:24.427 "avg_latency_us": 5438.145022717981, 00:39:24.427 "min_latency_us": 3151.9695238095237, 00:39:24.427 "max_latency_us": 27962.02666666667 00:39:24.427 } 00:39:24.427 ], 00:39:24.427 "core_count": 1 00:39:24.427 } 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 578979 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 578979 ']' 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 578979 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:24.427 05:39:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 578979 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 578979' 00:39:24.427 killing process with pid 578979 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 578979 00:39:24.427 Received shutdown signal, test time was about 10.000000 seconds 00:39:24.427 00:39:24.427 Latency(us) 00:39:24.427 [2024-12-15T04:39:38.114Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:24.427 [2024-12-15T04:39:38.114Z] =================================================================================================================== 00:39:24.427 [2024-12-15T04:39:38.114Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 578979 00:39:24.427 05:39:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:39:24.427 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:39:24.686 05:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:24.686 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 575977 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 575977 00:39:24.945 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 575977 Killed "${NVMF_APP[@]}" "$@" 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@509 -- # nvmfpid=580775 00:39:24.945 05:39:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@510 -- # waitforlisten 580775 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 580775 ']' 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:24.945 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:24.945 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:24.945 [2024-12-15 05:39:38.558614] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:24.945 [2024-12-15 05:39:38.559527] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:24.945 [2024-12-15 05:39:38.559564] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:25.204 [2024-12-15 05:39:38.634050] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:25.204 [2024-12-15 05:39:38.655102] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:25.204 [2024-12-15 05:39:38.655139] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:25.204 [2024-12-15 05:39:38.655146] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:25.204 [2024-12-15 05:39:38.655153] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:25.204 [2024-12-15 05:39:38.655158] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:25.204 [2024-12-15 05:39:38.655607] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:25.204 [2024-12-15 05:39:38.718464] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:25.204 [2024-12-15 05:39:38.718655] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:25.204 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:25.204 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:39:25.204 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:25.204 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:25.204 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:25.204 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:25.204 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:25.463 [2024-12-15 05:39:38.957013] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:39:25.463 [2024-12-15 05:39:38.957215] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:39:25.463 [2024-12-15 05:39:38.957300] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 52d2356c-0694-4156-ab48-41e0ddaaabe5 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=52d2356c-0694-4156-ab48-41e0ddaaabe5 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:25.463 05:39:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:25.723 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52d2356c-0694-4156-ab48-41e0ddaaabe5 -t 2000 00:39:25.723 [ 00:39:25.723 { 00:39:25.723 "name": "52d2356c-0694-4156-ab48-41e0ddaaabe5", 00:39:25.723 "aliases": [ 00:39:25.723 "lvs/lvol" 00:39:25.723 ], 00:39:25.723 "product_name": "Logical Volume", 00:39:25.723 "block_size": 4096, 00:39:25.723 "num_blocks": 38912, 00:39:25.723 "uuid": "52d2356c-0694-4156-ab48-41e0ddaaabe5", 00:39:25.723 "assigned_rate_limits": { 00:39:25.723 "rw_ios_per_sec": 0, 00:39:25.723 "rw_mbytes_per_sec": 0, 00:39:25.723 "r_mbytes_per_sec": 0, 00:39:25.723 "w_mbytes_per_sec": 0 00:39:25.723 }, 00:39:25.723 "claimed": false, 00:39:25.723 "zoned": false, 00:39:25.723 "supported_io_types": { 00:39:25.723 "read": true, 00:39:25.723 "write": true, 00:39:25.723 "unmap": true, 00:39:25.723 "flush": false, 00:39:25.723 "reset": true, 00:39:25.723 "nvme_admin": false, 00:39:25.723 "nvme_io": false, 00:39:25.723 "nvme_io_md": false, 00:39:25.723 "write_zeroes": true, 
00:39:25.723 "zcopy": false, 00:39:25.723 "get_zone_info": false, 00:39:25.723 "zone_management": false, 00:39:25.723 "zone_append": false, 00:39:25.723 "compare": false, 00:39:25.723 "compare_and_write": false, 00:39:25.723 "abort": false, 00:39:25.723 "seek_hole": true, 00:39:25.723 "seek_data": true, 00:39:25.723 "copy": false, 00:39:25.723 "nvme_iov_md": false 00:39:25.723 }, 00:39:25.723 "driver_specific": { 00:39:25.723 "lvol": { 00:39:25.723 "lvol_store_uuid": "32162861-8df3-49d1-866f-bf3d5d9f6ec2", 00:39:25.723 "base_bdev": "aio_bdev", 00:39:25.723 "thin_provision": false, 00:39:25.723 "num_allocated_clusters": 38, 00:39:25.723 "snapshot": false, 00:39:25.723 "clone": false, 00:39:25.723 "esnap_clone": false 00:39:25.723 } 00:39:25.723 } 00:39:25.723 } 00:39:25.723 ] 00:39:25.723 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:25.723 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:25.723 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:39:25.981 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:39:25.982 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:25.982 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:39:26.240 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:39:26.240 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:26.499 [2024-12-15 05:39:39.932079] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:39:26.499 05:39:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:26.499 request: 00:39:26.499 { 00:39:26.499 "uuid": "32162861-8df3-49d1-866f-bf3d5d9f6ec2", 00:39:26.499 "method": "bdev_lvol_get_lvstores", 00:39:26.499 "req_id": 1 00:39:26.499 } 00:39:26.499 Got JSON-RPC error response 00:39:26.499 response: 00:39:26.499 { 00:39:26.499 "code": -19, 00:39:26.499 "message": "No such device" 00:39:26.499 } 00:39:26.499 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:39:26.499 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:39:26.499 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:39:26.499 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:39:26.499 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:39:26.758 aio_bdev 00:39:26.758 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 52d2356c-0694-4156-ab48-41e0ddaaabe5 00:39:26.758 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=52d2356c-0694-4156-ab48-41e0ddaaabe5 00:39:26.758 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:39:26.758 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:39:26.758 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:39:26.758 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:39:26.758 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:39:27.017 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 52d2356c-0694-4156-ab48-41e0ddaaabe5 -t 2000 00:39:27.276 [ 00:39:27.276 { 00:39:27.276 "name": "52d2356c-0694-4156-ab48-41e0ddaaabe5", 00:39:27.276 "aliases": [ 00:39:27.276 "lvs/lvol" 00:39:27.276 ], 00:39:27.276 "product_name": "Logical Volume", 00:39:27.276 "block_size": 4096, 00:39:27.276 "num_blocks": 38912, 00:39:27.276 "uuid": "52d2356c-0694-4156-ab48-41e0ddaaabe5", 00:39:27.276 "assigned_rate_limits": { 00:39:27.276 "rw_ios_per_sec": 0, 00:39:27.276 "rw_mbytes_per_sec": 0, 00:39:27.276 
"r_mbytes_per_sec": 0, 00:39:27.276 "w_mbytes_per_sec": 0 00:39:27.276 }, 00:39:27.276 "claimed": false, 00:39:27.276 "zoned": false, 00:39:27.276 "supported_io_types": { 00:39:27.276 "read": true, 00:39:27.276 "write": true, 00:39:27.276 "unmap": true, 00:39:27.277 "flush": false, 00:39:27.277 "reset": true, 00:39:27.277 "nvme_admin": false, 00:39:27.277 "nvme_io": false, 00:39:27.277 "nvme_io_md": false, 00:39:27.277 "write_zeroes": true, 00:39:27.277 "zcopy": false, 00:39:27.277 "get_zone_info": false, 00:39:27.277 "zone_management": false, 00:39:27.277 "zone_append": false, 00:39:27.277 "compare": false, 00:39:27.277 "compare_and_write": false, 00:39:27.277 "abort": false, 00:39:27.277 "seek_hole": true, 00:39:27.277 "seek_data": true, 00:39:27.277 "copy": false, 00:39:27.277 "nvme_iov_md": false 00:39:27.277 }, 00:39:27.277 "driver_specific": { 00:39:27.277 "lvol": { 00:39:27.277 "lvol_store_uuid": "32162861-8df3-49d1-866f-bf3d5d9f6ec2", 00:39:27.277 "base_bdev": "aio_bdev", 00:39:27.277 "thin_provision": false, 00:39:27.277 "num_allocated_clusters": 38, 00:39:27.277 "snapshot": false, 00:39:27.277 "clone": false, 00:39:27.277 "esnap_clone": false 00:39:27.277 } 00:39:27.277 } 00:39:27.277 } 00:39:27.277 ] 00:39:27.277 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:39:27.277 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:27.277 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:39:27.277 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:39:27.277 05:39:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:27.277 05:39:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:39:27.536 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:39:27.536 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 52d2356c-0694-4156-ab48-41e0ddaaabe5 00:39:27.795 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 32162861-8df3-49d1-866f-bf3d5d9f6ec2 00:39:28.054 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:39:28.054 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:39:28.313 00:39:28.313 real 0m16.973s 00:39:28.313 user 0m34.353s 00:39:28.313 sys 0m3.825s 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:39:28.313 ************************************ 00:39:28.313 END TEST lvs_grow_dirty 00:39:28.313 ************************************ 
00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:39:28.313 nvmf_trace.0 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@121 -- # sync 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:28.313 05:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set +e 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:28.313 rmmod nvme_tcp 00:39:28.313 rmmod nvme_fabrics 00:39:28.313 rmmod nvme_keyring 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@128 -- # set -e 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # return 0 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@517 -- # '[' -n 580775 ']' 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@518 -- # killprocess 580775 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 580775 ']' 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 580775 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 580775 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:28.313 05:39:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 580775' 00:39:28.313 killing process with pid 580775 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 580775 00:39:28.313 05:39:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 580775 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@297 -- # iptr 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-save 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@791 -- # iptables-restore 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:28.572 05:39:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.108 05:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:31.108 00:39:31.108 real 0m41.634s 00:39:31.108 user 0m51.941s 00:39:31.108 sys 0m10.139s 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:39:31.108 ************************************ 00:39:31.108 END TEST nvmf_lvs_grow 00:39:31.108 ************************************ 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:31.108 ************************************ 00:39:31.108 START TEST nvmf_bdev_io_wait 00:39:31.108 ************************************ 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:39:31.108 * Looking for test storage... 
00:39:31.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lcov --version 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:31.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.108 --rc genhtml_branch_coverage=1 00:39:31.108 --rc genhtml_function_coverage=1 00:39:31.108 --rc genhtml_legend=1 00:39:31.108 --rc geninfo_all_blocks=1 00:39:31.108 --rc geninfo_unexecuted_blocks=1 00:39:31.108 00:39:31.108 ' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:31.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.108 --rc genhtml_branch_coverage=1 00:39:31.108 --rc genhtml_function_coverage=1 00:39:31.108 --rc genhtml_legend=1 00:39:31.108 --rc geninfo_all_blocks=1 00:39:31.108 --rc geninfo_unexecuted_blocks=1 00:39:31.108 00:39:31.108 ' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:31.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.108 --rc genhtml_branch_coverage=1 00:39:31.108 --rc genhtml_function_coverage=1 00:39:31.108 --rc genhtml_legend=1 00:39:31.108 --rc geninfo_all_blocks=1 00:39:31.108 --rc geninfo_unexecuted_blocks=1 00:39:31.108 00:39:31.108 ' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:31.108 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:31.108 --rc genhtml_branch_coverage=1 00:39:31.108 --rc genhtml_function_coverage=1 
00:39:31.108 --rc genhtml_legend=1 00:39:31.108 --rc geninfo_all_blocks=1 00:39:31.108 --rc geninfo_unexecuted_blocks=1 00:39:31.108 00:39:31.108 ' 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:39:31.108 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:31.109 05:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.109 05:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # : 0 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:31.109 05:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:31.109 05:39:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # xtrace_disable 00:39:31.109 05:39:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # pci_devs=() 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # net_devs=() 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # e810=() 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # local -ga e810 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # x722=() 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # local -ga x722 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # mlx=() 00:39:36.383 05:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@322 -- # local -ga mlx 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:36.383 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait 
-- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:36.384 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:36.384 Found 
0000:af:00.1 (0x8086 - 0x159b) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:36.384 Found net devices under 0000:af:00.0: cvl_0_0 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:36.384 Found net devices under 0000:af:00.1: cvl_0_1 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@442 -- # is_hw=yes 00:39:36.384 05:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ip 
-4 addr flush cvl_0_0 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:36.384 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:36.644 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:36.644 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.376 ms 00:39:36.644 00:39:36.644 --- 10.0.0.2 ping statistics --- 00:39:36.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.644 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:36.644 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:36.644 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.189 ms 00:39:36.644 00:39:36.644 --- 10.0.0.1 ping statistics --- 00:39:36.644 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:36.644 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # return 0 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:36.644 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:36.903 05:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@509 -- # nvmfpid=584744 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@510 -- # waitforlisten 584744 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 584744 ']' 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:36.903 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:36.903 [2024-12-15 05:39:50.389163] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:36.903 [2024-12-15 05:39:50.390054] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:36.903 [2024-12-15 05:39:50.390086] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:36.903 [2024-12-15 05:39:50.466268] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:39:36.903 [2024-12-15 05:39:50.489751] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:36.903 [2024-12-15 05:39:50.489790] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:36.903 [2024-12-15 05:39:50.489797] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:36.903 [2024-12-15 05:39:50.489804] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:36.903 [2024-12-15 05:39:50.489808] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:39:36.903 [2024-12-15 05:39:50.491144] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:36.903 [2024-12-15 05:39:50.491251] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:39:36.903 [2024-12-15 05:39:50.491361] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:36.903 [2024-12-15 05:39:50.491362] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:39:36.903 [2024-12-15 05:39:50.491617] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:36.903 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:36.904 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:36.904 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:39:36.904 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:36.904 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.163 05:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.163 [2024-12-15 05:39:50.656742] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:39:37.163 [2024-12-15 05:39:50.657379] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:39:37.163 [2024-12-15 05:39:50.657772] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:39:37.163 [2024-12-15 05:39:50.657873] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.163 [2024-12-15 05:39:50.668000] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.163 Malloc0 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.163 05:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:37.163 [2024-12-15 05:39:50.744266] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=584811 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=584814 00:39:37.163 05:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:37.163 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:37.163 { 00:39:37.163 "params": { 00:39:37.164 "name": "Nvme$subsystem", 00:39:37.164 "trtype": "$TEST_TRANSPORT", 00:39:37.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "$NVMF_PORT", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:37.164 "hdgst": ${hdgst:-false}, 00:39:37.164 "ddgst": ${ddgst:-false} 00:39:37.164 }, 00:39:37.164 "method": "bdev_nvme_attach_controller" 00:39:37.164 } 00:39:37.164 EOF 00:39:37.164 )") 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=584816 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:37.164 05:39:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:37.164 { 00:39:37.164 "params": { 00:39:37.164 "name": "Nvme$subsystem", 00:39:37.164 "trtype": "$TEST_TRANSPORT", 00:39:37.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "$NVMF_PORT", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:37.164 "hdgst": ${hdgst:-false}, 00:39:37.164 "ddgst": ${ddgst:-false} 00:39:37.164 }, 00:39:37.164 "method": "bdev_nvme_attach_controller" 00:39:37.164 } 00:39:37.164 EOF 00:39:37.164 )") 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=584820 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:37.164 { 00:39:37.164 "params": { 00:39:37.164 "name": 
"Nvme$subsystem", 00:39:37.164 "trtype": "$TEST_TRANSPORT", 00:39:37.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "$NVMF_PORT", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:37.164 "hdgst": ${hdgst:-false}, 00:39:37.164 "ddgst": ${ddgst:-false} 00:39:37.164 }, 00:39:37.164 "method": "bdev_nvme_attach_controller" 00:39:37.164 } 00:39:37.164 EOF 00:39:37.164 )") 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # config=() 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@560 -- # local subsystem config 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:39:37.164 { 00:39:37.164 "params": { 00:39:37.164 "name": "Nvme$subsystem", 00:39:37.164 "trtype": "$TEST_TRANSPORT", 00:39:37.164 "traddr": "$NVMF_FIRST_TARGET_IP", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "$NVMF_PORT", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:39:37.164 "hdgst": ${hdgst:-false}, 00:39:37.164 "ddgst": ${ddgst:-false} 00:39:37.164 }, 00:39:37.164 "method": 
"bdev_nvme_attach_controller" 00:39:37.164 } 00:39:37.164 EOF 00:39:37.164 )") 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 584811 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@582 -- # cat 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:37.164 "params": { 00:39:37.164 "name": "Nvme1", 00:39:37.164 "trtype": "tcp", 00:39:37.164 "traddr": "10.0.0.2", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "4420", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:37.164 "hdgst": false, 00:39:37.164 "ddgst": false 00:39:37.164 }, 00:39:37.164 "method": "bdev_nvme_attach_controller" 00:39:37.164 }' 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@584 -- # jq . 
00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:37.164 "params": { 00:39:37.164 "name": "Nvme1", 00:39:37.164 "trtype": "tcp", 00:39:37.164 "traddr": "10.0.0.2", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "4420", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:37.164 "hdgst": false, 00:39:37.164 "ddgst": false 00:39:37.164 }, 00:39:37.164 "method": "bdev_nvme_attach_controller" 00:39:37.164 }' 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:37.164 "params": { 00:39:37.164 "name": "Nvme1", 00:39:37.164 "trtype": "tcp", 00:39:37.164 "traddr": "10.0.0.2", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "4420", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:37.164 "hdgst": false, 00:39:37.164 "ddgst": false 00:39:37.164 }, 00:39:37.164 "method": "bdev_nvme_attach_controller" 00:39:37.164 }' 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@585 -- # IFS=, 00:39:37.164 05:39:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:39:37.164 "params": { 00:39:37.164 "name": "Nvme1", 00:39:37.164 "trtype": "tcp", 00:39:37.164 "traddr": "10.0.0.2", 00:39:37.164 "adrfam": "ipv4", 00:39:37.164 "trsvcid": "4420", 00:39:37.164 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:39:37.164 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:39:37.164 "hdgst": false, 00:39:37.164 "ddgst": false 00:39:37.164 }, 00:39:37.164 "method": "bdev_nvme_attach_controller" 00:39:37.164 }' 00:39:37.164 [2024-12-15 05:39:50.797160] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 
initialization... 00:39:37.164 [2024-12-15 05:39:50.797215] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:39:37.164 [2024-12-15 05:39:50.797271] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:37.164 [2024-12-15 05:39:50.797314] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:39:37.164 [2024-12-15 05:39:50.798346] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:37.164 [2024-12-15 05:39:50.798392] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:39:37.164 [2024-12-15 05:39:50.798681] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:37.164 [2024-12-15 05:39:50.798724] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:39:37.423 [2024-12-15 05:39:51.001592] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.423 [2024-12-15 05:39:51.022663] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:39:37.423 [2024-12-15 05:39:51.056276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.423 [2024-12-15 05:39:51.075626] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:39:37.682 [2024-12-15 05:39:51.114107] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.682 [2024-12-15 05:39:51.129322] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 7 00:39:37.682 [2024-12-15 05:39:51.207250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:37.682 [2024-12-15 05:39:51.230489] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:39:37.682 Running I/O for 1 seconds... 00:39:37.940 Running I/O for 1 seconds... 00:39:37.940 Running I/O for 1 seconds... 00:39:37.940 Running I/O for 1 seconds... 
00:39:38.876 14646.00 IOPS, 57.21 MiB/s 00:39:38.876 Latency(us) 00:39:38.877 [2024-12-15T04:39:52.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.877 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:39:38.877 Nvme1n1 : 1.01 14689.43 57.38 0.00 0.00 8686.53 3448.44 10423.34 00:39:38.877 [2024-12-15T04:39:52.564Z] =================================================================================================================== 00:39:38.877 [2024-12-15T04:39:52.564Z] Total : 14689.43 57.38 0.00 0.00 8686.53 3448.44 10423.34 00:39:38.877 6934.00 IOPS, 27.09 MiB/s 00:39:38.877 Latency(us) 00:39:38.877 [2024-12-15T04:39:52.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.877 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:39:38.877 Nvme1n1 : 1.01 6971.70 27.23 0.00 0.00 18179.18 4337.86 31582.11 00:39:38.877 [2024-12-15T04:39:52.564Z] =================================================================================================================== 00:39:38.877 [2024-12-15T04:39:52.564Z] Total : 6971.70 27.23 0.00 0.00 18179.18 4337.86 31582.11 00:39:38.877 242392.00 IOPS, 946.84 MiB/s 00:39:38.877 Latency(us) 00:39:38.877 [2024-12-15T04:39:52.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.877 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:39:38.877 Nvme1n1 : 1.00 242024.86 945.41 0.00 0.00 526.29 222.35 1497.97 00:39:38.877 [2024-12-15T04:39:52.564Z] =================================================================================================================== 00:39:38.877 [2024-12-15T04:39:52.564Z] Total : 242024.86 945.41 0.00 0.00 526.29 222.35 1497.97 00:39:38.877 7143.00 IOPS, 27.90 MiB/s 00:39:38.877 Latency(us) 00:39:38.877 [2024-12-15T04:39:52.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:38.877 Job: Nvme1n1 (Core Mask 
0x10, workload: write, depth: 128, IO size: 4096) 00:39:38.877 Nvme1n1 : 1.01 7249.91 28.32 0.00 0.00 17613.53 3386.03 35701.52 00:39:38.877 [2024-12-15T04:39:52.564Z] =================================================================================================================== 00:39:38.877 [2024-12-15T04:39:52.564Z] Total : 7249.91 28.32 0.00 0.00 17613.53 3386.03 35701.52 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 584814 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 584816 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 584820 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # sync 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:39.136 05:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set +e 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:39.136 rmmod nvme_tcp 00:39:39.136 rmmod nvme_fabrics 00:39:39.136 rmmod nvme_keyring 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@128 -- # set -e 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # return 0 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@517 -- # '[' -n 584744 ']' 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@518 -- # killprocess 584744 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 584744 ']' 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 584744 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 584744 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 584744' 00:39:39.136 killing process with pid 584744 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 584744 00:39:39.136 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 584744 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # iptr 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-save 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@791 -- # iptables-restore 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:39.396 05:39:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:39.396 05:39:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.301 05:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:39:41.564 00:39:41.564 real 0m10.720s 00:39:41.564 user 0m15.064s 00:39:41.564 sys 0m6.440s 00:39:41.564 05:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:39:41.564 05:39:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:39:41.564 ************************************ 00:39:41.564 END TEST nvmf_bdev_io_wait 00:39:41.564 ************************************ 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:39:41.564 ************************************ 00:39:41.564 START TEST nvmf_queue_depth 00:39:41.564 ************************************ 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:39:41.564 * Looking for test storage... 
00:39:41.564 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lcov --version 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:39:41.564 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < 
ver2[v] )) 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:39:41.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.565 --rc genhtml_branch_coverage=1 00:39:41.565 --rc genhtml_function_coverage=1 00:39:41.565 --rc genhtml_legend=1 00:39:41.565 --rc geninfo_all_blocks=1 00:39:41.565 --rc geninfo_unexecuted_blocks=1 00:39:41.565 00:39:41.565 ' 00:39:41.565 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:39:41.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.565 --rc genhtml_branch_coverage=1 00:39:41.565 --rc genhtml_function_coverage=1 00:39:41.565 --rc genhtml_legend=1 00:39:41.565 --rc geninfo_all_blocks=1 00:39:41.566 --rc geninfo_unexecuted_blocks=1 00:39:41.566 00:39:41.566 ' 00:39:41.566 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:39:41.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.566 --rc genhtml_branch_coverage=1 00:39:41.566 --rc genhtml_function_coverage=1 00:39:41.566 --rc genhtml_legend=1 00:39:41.566 --rc geninfo_all_blocks=1 00:39:41.566 --rc geninfo_unexecuted_blocks=1 00:39:41.566 00:39:41.566 ' 00:39:41.566 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:39:41.566 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:39:41.566 --rc genhtml_branch_coverage=1 00:39:41.566 --rc genhtml_function_coverage=1 00:39:41.566 --rc genhtml_legend=1 00:39:41.566 --rc 
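The trace above shows `cmp_versions` in scripts/common.sh deciding that lcov 1.15 is older than 2: it splits each version string on `.`, `-`, and `:`, then walks the components numerically, treating missing components as 0. A minimal standalone sketch of that comparison (the function name `lt` matches the wrapper in the trace; the body here is a simplified reconstruction, not the exact script):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison traced above.
# Returns 0 (true) when $1 is strictly older than $2.
lt() {
    local -a ver1 ver2
    local v max
    IFS=.-: read -ra ver1 <<< "$1"   # "1.15" -> (1 15)
    IFS=.-: read -ra ver2 <<< "$2"   # "2"    -> (2)
    max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        # Missing components default to 0, so "2" compares as "2.0"
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "1.15 < 2"
```

Because 1.15 < 2 holds, the harness concludes the installed lcov predates the 2.x option renaming and sets the legacy `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` flags seen in `LCOV_OPTS` above.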
geninfo_all_blocks=1 00:39:41.566 --rc geninfo_unexecuted_blocks=1 00:39:41.566 00:39:41.566 ' 00:39:41.566 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:39:41.566 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:39:41.566 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:39:41.566 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:39:41.567 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # 
NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.828 05:39:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # : 0 00:39:41.828 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:39:41.829 05:39:55 
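The paths/export.sh trace above prepends the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories on every invocation, so PATH accumulates many duplicate entries over the run. The harness tolerates this, but an order-preserving de-duplication pass would collapse it; this helper is purely illustrative and not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Illustrative PATH de-duplication: keep the first occurrence of each
# directory, preserve order. Not part of the autotest harness.
dedup_path() {
    local out= dir
    local IFS=:
    for dir in $1; do            # unquoted: split on ':' via IFS
        case ":$out:" in
            *":$dir:"*) ;;                    # already seen, skip
            *) out=${out:+$out:}$dir ;;       # append, ':'-separated
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/a/bin:/b/bin:/a/bin:/c/bin"   # -> /a/bin:/b/bin:/c/bin
```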
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@55 -- # have_pci_nics=0 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@476 -- # prepare_net_devs 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@438 -- # local -g is_hw=no 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@440 -- # remove_spdk_ns 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:39:41.829 05:39:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # xtrace_disable 00:39:41.829 05:39:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # pci_devs=() 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # local -a pci_devs 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # pci_net_devs=() 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # pci_drivers=() 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@317 -- # local -A pci_drivers 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # net_devs=() 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@319 -- # local -ga net_devs 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # e810=() 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@320 -- # local -ga e810 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # x722=() 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # local -ga x722 00:39:48.400 
05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # mlx=() 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@322 -- # local -ga mlx 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@344 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:39:48.400 Found 0000:af:00.0 (0x8086 - 0x159b) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:39:48.400 05:40:00 
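The `gather_supported_nvmf_pci_devs` trace above buckets NICs by PCI vendor:device ID (Intel 0x8086 E810 parts 0x1592/0x159b, X722 0x37d2, and a list of Mellanox 0x15b3 parts) before matching them against the detected `0x8086 - 0x159b` devices. A small lookup in the same spirit, with IDs taken from the trace (the function itself is illustrative, not from nvmf/common.sh):

```shell
#!/usr/bin/env bash
# Classify a "vendor:device" pair into the NIC families the harness
# recognizes. IDs copied from the nvmf/common.sh trace; helper is a sketch.
nic_family() {
    case "$1" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx  ;;    # Mellanox parts
        *)                           echo unknown ;;
    esac
}

nic_family 0x8086:0x159b   # -> e810, matching "Found 0000:af:00.0" above
```

Both discovered ports (0000:af:00.0 and 0000:af:00.1) report 0x8086:0x159b, which is why the run proceeds down the e810 branch.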
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:39:48.400 Found 0000:af:00.1 (0x8086 - 0x159b) 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:39:48.400 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 
)) 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:39:48.401 Found net devices under 0000:af:00.0: cvl_0_0 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@418 -- # [[ up == up ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:39:48.401 Found net devices under 0000:af:00.1: cvl_0_1 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:39:48.401 05:40:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@442 -- # is_hw=yes 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:39:48.401 05:40:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:39:48.401 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:39:48.401 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.311 ms 00:39:48.401 00:39:48.401 --- 10.0.0.2 ping statistics --- 00:39:48.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.401 rtt min/avg/max/mdev = 0.311/0.311/0.311/0.000 ms 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:39:48.401 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:39:48.401 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.239 ms 00:39:48.401 00:39:48.401 --- 10.0.0.1 ping statistics --- 00:39:48.401 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:39:48.401 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@450 -- # return 0 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:39:48.401 05:40:01 
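The `nvmf_tcp_init` commands traced above build the target/initiator split that the two pings verify: the target NIC cvl_0_0 is moved into a fresh namespace with 10.0.0.2, the initiator NIC cvl_0_1 stays in the root namespace with 10.0.0.1, and an iptables rule opens TCP 4420. Replayed by hand (interface, namespace, and address values follow the trace; requires root and the same two NICs, so this is a reference sequence rather than something to run verbatim):

```shell
#!/usr/bin/env bash
# Target/initiator namespace split, as driven by nvmf/common.sh above.
NS=cvl_0_0_ns_spdk
ip netns add "$NS"
ip link set cvl_0_0 netns "$NS"                           # target NIC
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator side
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
ip link set cvl_0_1 up
ip netns exec "$NS" ip link set cvl_0_0 up
ip netns exec "$NS" ip link set lo up
# Open the NVMe/TCP port toward the initiator interface (tagged for cleanup)
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment SPDK_NVMF
# Cross-namespace reachability check, matching the ping output above
ping -c 1 10.0.0.2 && ip netns exec "$NS" ping -c 1 10.0.0.1
```

Keeping the target in its own namespace means the nvmf_tgt process is later launched under `ip netns exec cvl_0_0_ns_spdk`, so target and initiator traffic genuinely crosses the physical link between the two ports.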
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@509 -- # nvmfpid=588656 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@510 -- # waitforlisten 588656 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588656 ']' 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:39:48.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.401 [2024-12-15 05:40:01.236221] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:39:48.401 [2024-12-15 05:40:01.237211] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:39:48.401 [2024-12-15 05:40:01.237252] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:39:48.401 [2024-12-15 05:40:01.318621] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.401 [2024-12-15 05:40:01.339256] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:39:48.401 [2024-12-15 05:40:01.339292] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:39:48.401 [2024-12-15 05:40:01.339300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:39:48.401 [2024-12-15 05:40:01.339307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:39:48.401 [2024-12-15 05:40:01.339312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:39:48.401 [2024-12-15 05:40:01.339747] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:39:48.401 [2024-12-15 05:40:01.401060] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:39:48.401 [2024-12-15 05:40:01.401258] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:39:48.401 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 [2024-12-15 05:40:01.476466] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 Malloc0 00:39:48.402 05:40:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 [2024-12-15 05:40:01.548527] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.402 
05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=588714 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 588714 /var/tmp/bdevperf.sock 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 588714 ']' 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:39:48.402 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 [2024-12-15 05:40:01.600004] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:39:48.402 [2024-12-15 05:40:01.600045] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid588714 ] 00:39:48.402 [2024-12-15 05:40:01.674178] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:39:48.402 [2024-12-15 05:40:01.696785] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:39:48.402 NVMe0n1 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:39:48.402 05:40:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:39:48.402 Running I/O for 10 seconds... 
00:39:50.717 12288.00 IOPS, 48.00 MiB/s [2024-12-15T04:40:05.340Z] 12293.00 IOPS, 48.02 MiB/s [2024-12-15T04:40:06.277Z] 12500.67 IOPS, 48.83 MiB/s [2024-12-15T04:40:07.214Z] 12537.75 IOPS, 48.98 MiB/s [2024-12-15T04:40:08.151Z] 12585.20 IOPS, 49.16 MiB/s [2024-12-15T04:40:09.088Z] 12623.50 IOPS, 49.31 MiB/s [2024-12-15T04:40:10.465Z] 12638.71 IOPS, 49.37 MiB/s [2024-12-15T04:40:11.403Z] 12664.88 IOPS, 49.47 MiB/s [2024-12-15T04:40:12.339Z] 12656.11 IOPS, 49.44 MiB/s [2024-12-15T04:40:12.339Z] 12701.80 IOPS, 49.62 MiB/s 00:39:58.652 Latency(us) 00:39:58.652 [2024-12-15T04:40:12.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.652 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:39:58.652 Verification LBA range: start 0x0 length 0x4000 00:39:58.652 NVMe0n1 : 10.05 12732.14 49.73 0.00 0.00 80164.01 13731.35 54426.09 00:39:58.652 [2024-12-15T04:40:12.339Z] =================================================================================================================== 00:39:58.652 [2024-12-15T04:40:12.339Z] Total : 12732.14 49.73 0.00 0.00 80164.01 13731.35 54426.09 00:39:58.652 { 00:39:58.652 "results": [ 00:39:58.652 { 00:39:58.652 "job": "NVMe0n1", 00:39:58.652 "core_mask": "0x1", 00:39:58.652 "workload": "verify", 00:39:58.652 "status": "finished", 00:39:58.652 "verify_range": { 00:39:58.652 "start": 0, 00:39:58.652 "length": 16384 00:39:58.652 }, 00:39:58.652 "queue_depth": 1024, 00:39:58.652 "io_size": 4096, 00:39:58.652 "runtime": 10.051568, 00:39:58.652 "iops": 12732.142885567704, 00:39:58.652 "mibps": 49.734933146748844, 00:39:58.652 "io_failed": 0, 00:39:58.652 "io_timeout": 0, 00:39:58.652 "avg_latency_us": 80164.01264300634, 00:39:58.652 "min_latency_us": 13731.352380952381, 00:39:58.652 "max_latency_us": 54426.08761904762 00:39:58.652 } 00:39:58.652 ], 00:39:58.652 "core_count": 1 00:39:58.652 } 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
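(Editor's aside, not part of the captured log: the bdevperf JSON summary above reports both `iops` and `mibps`; the throughput figure is simply iops × io_size converted to MiB. A minimal sketch checking that relationship, with the field values copied verbatim from the JSON block above:)

```python
# Fields copied from the bdevperf "results" JSON printed in the log above.
result = {
    "job": "NVMe0n1",
    "queue_depth": 1024,
    "io_size": 4096,               # bytes per I/O (the -o 4096 bdevperf flag)
    "runtime": 10.051568,          # seconds
    "iops": 12732.142885567704,
    "mibps": 49.734933146748844,
}

# MiB/s = iops * io_size / 2^20; with 4096-byte I/Os this is iops / 256.
derived_mibps = result["iops"] * result["io_size"] / (1024 * 1024)

# Total I/Os completed over the run follows from iops * runtime.
total_ios = result["iops"] * result["runtime"]

print(round(derived_mibps, 2))   # agrees with the reported 49.73 MiB/s
```

The derived value matches the `mibps` field to floating-point precision, which is a quick sanity check when comparing runs that use different `-o` (I/O size) settings.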
target/queue_depth.sh@39 -- # killprocess 588714 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588714 ']' 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588714 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588714 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588714' 00:39:58.652 killing process with pid 588714 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588714 00:39:58.652 Received shutdown signal, test time was about 10.000000 seconds 00:39:58.652 00:39:58.652 Latency(us) 00:39:58.652 [2024-12-15T04:40:12.339Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:39:58.652 [2024-12-15T04:40:12.339Z] =================================================================================================================== 00:39:58.652 [2024-12-15T04:40:12.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:39:58.652 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588714 00:39:58.954 05:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@516 -- # nvmfcleanup 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@121 -- # sync 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@124 -- # set +e 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # for i in {1..20} 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:39:58.954 rmmod nvme_tcp 00:39:58.954 rmmod nvme_fabrics 00:39:58.954 rmmod nvme_keyring 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@128 -- # set -e 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # return 0 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@517 -- # '[' -n 588656 ']' 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@518 -- # killprocess 588656 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 588656 ']' 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 588656 00:39:58.954 05:40:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 588656 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:39:58.954 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 588656' 00:39:58.954 killing process with pid 588656 00:39:58.955 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 588656 00:39:58.955 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 588656 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@297 -- # iptr 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-save 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@791 -- # iptables-restore 
00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@302 -- # remove_spdk_ns 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:39:59.258 05:40:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:01.265 00:40:01.265 real 0m19.672s 00:40:01.265 user 0m22.763s 00:40:01.265 sys 0m6.156s 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:40:01.265 ************************************ 00:40:01.265 END TEST nvmf_queue_depth 00:40:01.265 ************************************ 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:01.265 ************************************ 00:40:01.265 START 
TEST nvmf_target_multipath 00:40:01.265 ************************************ 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp --interrupt-mode 00:40:01.265 * Looking for test storage... 00:40:01.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lcov --version 00:40:01.265 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # IFS=.-: 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@336 -- # read -ra ver1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # IFS=.-: 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@337 -- # read -ra ver2 00:40:01.525 05:40:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@338 -- # local 'op=<' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@340 -- # ver1_l=2 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@341 -- # ver2_l=1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@344 -- # case "$op" in 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@345 -- # : 1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # decimal 1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@365 -- # ver1[v]=1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # decimal 2 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@353 -- # local d=2 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@355 -- # echo 2 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@366 -- # ver2[v]=2 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@368 -- # return 0 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:01.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.525 --rc genhtml_branch_coverage=1 00:40:01.525 --rc genhtml_function_coverage=1 00:40:01.525 --rc genhtml_legend=1 00:40:01.525 --rc geninfo_all_blocks=1 00:40:01.525 --rc geninfo_unexecuted_blocks=1 00:40:01.525 00:40:01.525 ' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:01.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.525 --rc genhtml_branch_coverage=1 00:40:01.525 --rc genhtml_function_coverage=1 00:40:01.525 --rc genhtml_legend=1 00:40:01.525 --rc geninfo_all_blocks=1 00:40:01.525 --rc geninfo_unexecuted_blocks=1 00:40:01.525 00:40:01.525 ' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:01.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.525 --rc genhtml_branch_coverage=1 00:40:01.525 --rc genhtml_function_coverage=1 00:40:01.525 --rc genhtml_legend=1 00:40:01.525 --rc geninfo_all_blocks=1 00:40:01.525 --rc geninfo_unexecuted_blocks=1 00:40:01.525 00:40:01.525 ' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:01.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:01.525 --rc genhtml_branch_coverage=1 00:40:01.525 --rc genhtml_function_coverage=1 00:40:01.525 --rc genhtml_legend=1 00:40:01.525 --rc geninfo_all_blocks=1 00:40:01.525 --rc geninfo_unexecuted_blocks=1 00:40:01.525 00:40:01.525 ' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@7 -- # uname -s 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:01.525 05:40:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@15 -- # shopt -s extglob 00:40:01.525 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:01.525 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:01.525 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:01.525 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@51 -- # : 0 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:01.526 05:40:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:01.526 05:40:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@309 -- # xtrace_disable 00:40:01.526 05:40:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # pci_devs=() 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # net_devs=() 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:08.096 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # e810=() 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@320 -- # local -ga e810 00:40:08.097 05:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # x722=() 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@321 -- # local -ga x722 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # mlx=() 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@322 -- # local -ga mlx 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@341 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:08.097 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- 
nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:08.097 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 
-- # [[ tcp == tcp ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:08.097 Found net devices under 0000:af:00.0: cvl_0_0 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:08.097 05:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:08.097 Found net devices under 0000:af:00.1: cvl_0_1 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@442 -- # is_hw=yes 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:08.097 05:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:08.097 05:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:08.097 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:08.097 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.396 ms 00:40:08.097 00:40:08.097 --- 10.0.0.2 ping statistics --- 00:40:08.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.097 rtt min/avg/max/mdev = 0.396/0.396/0.396/0.000 ms 00:40:08.097 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:08.097 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:08.097 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.216 ms 00:40:08.097 00:40:08.097 --- 10.0.0.1 ping statistics --- 00:40:08.097 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:08.097 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@450 -- # return 0 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:40:08.098 only one NIC for nvmf test 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:40:08.098 05:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:08.098 rmmod nvme_tcp 00:40:08.098 rmmod nvme_fabrics 00:40:08.098 rmmod nvme_keyring 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:08.098 05:40:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:08.098 05:40:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@121 -- # sync 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@124 -- # set +e 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@125 -- # for i in {1..20} 
00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@128 -- # set -e 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@129 -- # return 0 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@517 -- # '[' -n '' ']' 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@297 -- # iptr 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-save 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@791 -- # iptables-restore 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.477 
05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:09.477 00:40:09.477 real 0m8.268s 00:40:09.477 user 0m1.839s 00:40:09.477 sys 0m4.423s 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:40:09.477 ************************************ 00:40:09.477 END TEST nvmf_target_multipath 00:40:09.477 ************************************ 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:09.477 ************************************ 00:40:09.477 START TEST nvmf_zcopy 00:40:09.477 ************************************ 00:40:09.477 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:40:09.737 * Looking for test storage... 
00:40:09.737 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lcov --version 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:40:09.737 05:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:09.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.737 --rc genhtml_branch_coverage=1 00:40:09.737 --rc genhtml_function_coverage=1 00:40:09.737 --rc genhtml_legend=1 00:40:09.737 --rc geninfo_all_blocks=1 00:40:09.737 --rc geninfo_unexecuted_blocks=1 00:40:09.737 00:40:09.737 ' 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:09.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.737 --rc genhtml_branch_coverage=1 00:40:09.737 --rc genhtml_function_coverage=1 00:40:09.737 --rc genhtml_legend=1 00:40:09.737 --rc geninfo_all_blocks=1 00:40:09.737 --rc geninfo_unexecuted_blocks=1 00:40:09.737 00:40:09.737 ' 00:40:09.737 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:09.737 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.737 --rc genhtml_branch_coverage=1 00:40:09.737 --rc genhtml_function_coverage=1 00:40:09.737 --rc genhtml_legend=1 00:40:09.737 --rc geninfo_all_blocks=1 00:40:09.737 --rc geninfo_unexecuted_blocks=1 00:40:09.737 00:40:09.737 ' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:09.738 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:09.738 --rc genhtml_branch_coverage=1 00:40:09.738 --rc genhtml_function_coverage=1 00:40:09.738 --rc genhtml_legend=1 00:40:09.738 --rc geninfo_all_blocks=1 00:40:09.738 --rc geninfo_unexecuted_blocks=1 00:40:09.738 00:40:09.738 ' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:09.738 05:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # : 0 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:09.738 05:40:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # xtrace_disable 00:40:09.738 05:40:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.308 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # pci_devs=() 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:16.309 
05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # net_devs=() 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # e810=() 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@320 -- # local -ga e810 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # x722=() 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # local -ga x722 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # mlx=() 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@322 -- # local -ga mlx 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:16.309 05:40:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 
00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:16.309 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:16.309 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 
00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:16.309 Found net devices under 0000:af:00.0: cvl_0_0 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@418 -- # [[ up == up ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:16.309 Found net devices under 0000:af:00.1: cvl_0_1 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@442 -- # is_hw=yes 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:16.309 05:40:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:16.309 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:16.309 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:16.309 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:16.309 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:16.309 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:16.309 05:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:16.309 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:16.309 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:16.309 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:16.309 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.392 ms 00:40:16.309 00:40:16.309 --- 10.0.0.2 ping statistics --- 00:40:16.309 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.309 rtt min/avg/max/mdev = 0.392/0.392/0.392/0.000 ms 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:16.310 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:16.310 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.201 ms 00:40:16.310 00:40:16.310 --- 10.0.0.1 ping statistics --- 00:40:16.310 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:16.310 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@450 -- # return 0 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@509 -- # 
nvmfpid=597206 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@510 -- # waitforlisten 597206 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 597206 ']' 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:16.310 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 [2024-12-15 05:40:29.404800] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:16.310 [2024-12-15 05:40:29.405701] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:16.310 [2024-12-15 05:40:29.405733] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:16.310 [2024-12-15 05:40:29.482760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.310 [2024-12-15 05:40:29.503618] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:16.310 [2024-12-15 05:40:29.503651] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:16.310 [2024-12-15 05:40:29.503658] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:16.310 [2024-12-15 05:40:29.503664] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:16.310 [2024-12-15 05:40:29.503669] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:16.310 [2024-12-15 05:40:29.504145] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:16.310 [2024-12-15 05:40:29.565349] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:16.310 [2024-12-15 05:40:29.565544] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 [2024-12-15 05:40:29.640807] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 
05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 [2024-12-15 05:40:29.669096] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 malloc0 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:40:16.310 { 00:40:16.310 "params": { 00:40:16.310 "name": "Nvme$subsystem", 00:40:16.310 "trtype": "$TEST_TRANSPORT", 00:40:16.310 "traddr": "$NVMF_FIRST_TARGET_IP", 00:40:16.310 "adrfam": "ipv4", 00:40:16.310 "trsvcid": "$NVMF_PORT", 00:40:16.310 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:40:16.310 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:40:16.310 "hdgst": ${hdgst:-false}, 00:40:16.310 "ddgst": ${ddgst:-false} 00:40:16.310 }, 00:40:16.310 "method": "bdev_nvme_attach_controller" 00:40:16.310 } 00:40:16.310 EOF 00:40:16.310 )") 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat 00:40:16.310 05:40:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq . 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=, 00:40:16.310 05:40:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:40:16.310 "params": { 00:40:16.310 "name": "Nvme1", 00:40:16.310 "trtype": "tcp", 00:40:16.310 "traddr": "10.0.0.2", 00:40:16.310 "adrfam": "ipv4", 00:40:16.310 "trsvcid": "4420", 00:40:16.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:40:16.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:40:16.310 "hdgst": false, 00:40:16.310 "ddgst": false 00:40:16.310 }, 00:40:16.310 "method": "bdev_nvme_attach_controller" 00:40:16.310 }' 00:40:16.310 [2024-12-15 05:40:29.765331] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:40:16.310 [2024-12-15 05:40:29.765375] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid597226 ] 00:40:16.310 [2024-12-15 05:40:29.840508] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:16.310 [2024-12-15 05:40:29.862861] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:16.569 Running I/O for 10 seconds... 
00:40:18.888 8424.00 IOPS, 65.81 MiB/s [2024-12-15T04:40:33.512Z] 8579.00 IOPS, 67.02 MiB/s [2024-12-15T04:40:34.448Z] 8598.67 IOPS, 67.18 MiB/s [2024-12-15T04:40:35.383Z] 8627.50 IOPS, 67.40 MiB/s [2024-12-15T04:40:36.319Z] 8649.00 IOPS, 67.57 MiB/s [2024-12-15T04:40:37.255Z] 8663.50 IOPS, 67.68 MiB/s [2024-12-15T04:40:38.195Z] 8679.57 IOPS, 67.81 MiB/s [2024-12-15T04:40:39.573Z] 8690.50 IOPS, 67.89 MiB/s [2024-12-15T04:40:40.509Z] 8689.22 IOPS, 67.88 MiB/s [2024-12-15T04:40:40.509Z] 8687.40 IOPS, 67.87 MiB/s 00:40:26.822 Latency(us) 00:40:26.822 [2024-12-15T04:40:40.510Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:40:26.823 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:40:26.823 Verification LBA range: start 0x0 length 0x1000 00:40:26.823 Nvme1n1 : 10.05 8655.44 67.62 0.00 0.00 14690.66 2215.74 43191.34 00:40:26.823 [2024-12-15T04:40:40.510Z] =================================================================================================================== 00:40:26.823 [2024-12-15T04:40:40.510Z] Total : 8655.44 67.62 0.00 0.00 14690.66 2215.74 43191.34 00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=598996 00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # config=() 00:40:26.823 05:40:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@560 -- # local subsystem config
00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}"
00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF
00:40:26.823 {
00:40:26.823 "params": {
00:40:26.823 "name": "Nvme$subsystem",
00:40:26.823 "trtype": "$TEST_TRANSPORT",
00:40:26.823 "traddr": "$NVMF_FIRST_TARGET_IP",
00:40:26.823 "adrfam": "ipv4",
00:40:26.823 "trsvcid": "$NVMF_PORT",
00:40:26.823 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:40:26.823 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:40:26.823 "hdgst": ${hdgst:-false},
00:40:26.823 "ddgst": ${ddgst:-false}
00:40:26.823 },
00:40:26.823 "method": "bdev_nvme_attach_controller"
00:40:26.823 }
00:40:26.823 EOF
00:40:26.823 )")
00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@582 -- # cat
00:40:26.823 [2024-12-15 05:40:40.380481] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:26.823 [2024-12-15 05:40:40.380512] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@584 -- # jq .
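The xtrace above walks through `nvmf/common.sh`'s `gen_nvmf_target_json` helper: a heredoc template is expanded once per subsystem and the fragments are joined with `IFS=,`. A minimal self-contained reconstruction pieced together from the trace, with some simplifications: the `jq .` compaction step at `@584` is omitted, the transport variables are hard-coded to the values seen in the resolved config, and a plain `<<EOF` stands in for the script's tab-indented `<<-EOF`.

```shell
# Hard-coded here for illustration; in the test suite these come from the
# environment set up by nvmf/common.sh.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

gen_nvmf_target_json() {
    local subsystem config=()
    for subsystem in "${@:-1}"; do
        # one bdev_nvme_attach_controller fragment per subsystem,
        # expanded from the heredoc template
        config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
        )")
    done
    local IFS=,              # join multiple fragments with commas
    printf '%s\n' "${config[*]}"
}

gen_nvmf_target_json 1
```

Run with no arguments the loop defaults to subsystem 1 (`"${@:-1}"`), which is exactly why the resolved config in the trace names `Nvme1`, `cnode1`, and `host1`.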
00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@585 -- # IFS=,
00:40:26.823 05:40:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@586 -- # printf '%s\n' '{
00:40:26.823 "params": {
00:40:26.823 "name": "Nvme1",
00:40:26.823 "trtype": "tcp",
00:40:26.823 "traddr": "10.0.0.2",
00:40:26.823 "adrfam": "ipv4",
00:40:26.823 "trsvcid": "4420",
00:40:26.823 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:40:26.823 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:40:26.823 "hdgst": false,
00:40:26.823 "ddgst": false
00:40:26.823 },
00:40:26.823 "method": "bdev_nvme_attach_controller"
00:40:26.823 }'
00:40:26.823 [2024-12-15 05:40:40.392445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:26.823 [2024-12-15 05:40:40.392459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:26.823 [2024-12-15 05:40:40.404441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:26.823 [2024-12-15 05:40:40.404452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:26.823 [2024-12-15 05:40:40.416440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:26.823 [2024-12-15 05:40:40.416450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:26.823 [2024-12-15 05:40:40.416833] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization...
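The second bdevperf instance is launched as `bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192`, which indicates the generated JSON is handed over through bash process substitution rather than a temporary file. A small stand-alone illustration of the mechanism; `show_config` is a hypothetical stand-in for the real consumer:

```shell
# <(...) runs a command and expands to a /dev/fd/NN path; the consumer
# opens that path like an ordinary file -- hence "--json /dev/fd/63" in
# the trace above.  show_config is a hypothetical stand-in for:
#   bdevperf --json "$1" -t 5 -q 128 -w randrw -M 50 -o 8192
show_config() {
    cat "$1"
}
show_config <(printf '%s\n' '{ "method": "bdev_nvme_attach_controller" }')
```

Because the config never touches disk, each bdevperf run gets a fresh fd number assigned by the shell, so the exact `/dev/fd/63` path is incidental.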
00:40:26.823 [2024-12-15 05:40:40.416877] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid598996 ] 00:40:26.823 [2024-12-15 05:40:40.428441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.823 [2024-12-15 05:40:40.428452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.823 [2024-12-15 05:40:40.440441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.823 [2024-12-15 05:40:40.440451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.823 [2024-12-15 05:40:40.452441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.823 [2024-12-15 05:40:40.452452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.823 [2024-12-15 05:40:40.464443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.823 [2024-12-15 05:40:40.464454] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.823 [2024-12-15 05:40:40.476441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.823 [2024-12-15 05:40:40.476451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.823 [2024-12-15 05:40:40.488440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:26.823 [2024-12-15 05:40:40.488450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:26.823 [2024-12-15 05:40:40.490571] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:40:26.823 [2024-12-15 05:40:40.500446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:40:26.823 [2024-12-15 05:40:40.500460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.512444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.512459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.512808] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:27.082 [2024-12-15 05:40:40.524450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.524467] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.536451] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.536468] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.548447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.548463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.560443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.560456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.572444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.572458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.584450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.584466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.596453] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.596472] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.608445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.608459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.620445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.620460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.632444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.632456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.644443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.644453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.656439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.656449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.082 [2024-12-15 05:40:40.668446] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.082 [2024-12-15 05:40:40.668460] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.680443] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.680455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.692440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.692450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.704440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.704450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.716444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.716458] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.728441] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.728450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.740440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.740449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.752440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.752450] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.083 [2024-12-15 05:40:40.764442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.083 [2024-12-15 05:40:40.764456] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.776440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.776451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.788439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 
[2024-12-15 05:40:40.788449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.800439] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.800449] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.812781] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.812799] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 Running I/O for 5 seconds... 00:40:27.342 [2024-12-15 05:40:40.826588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.826607] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.841220] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.841240] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.856523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.856543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.870335] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.870354] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.885007] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.885027] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.900041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 
05:40:40.900061] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.914293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.914312] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.929253] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.929272] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.944898] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.944916] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.959825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.959843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.974803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.974822] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:40.989320] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:40.989339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:41.004031] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:41.004050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.342 [2024-12-15 05:40:41.017108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.342 [2024-12-15 05:40:41.017127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: 
*ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.032301] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.032322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.046023] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.046045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.060565] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.060585] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.071636] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.071655] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.086108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.086127] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.100410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.100429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.114476] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.114495] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.128759] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.128778] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 
[2024-12-15 05:40:41.144261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.144281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.158582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.158605] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.173053] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.173072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.188436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.188455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.201343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.201362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.216537] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.216557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.229568] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.229587] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.243864] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.243883] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.257217] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.257237] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.269752] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.269770] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.602 [2024-12-15 05:40:41.281928] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.602 [2024-12-15 05:40:41.281947] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.297146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.297165] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.312616] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.312635] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.324534] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.324554] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.337767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.337787] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.352902] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.352921] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.368608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.368627] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.381186] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.381205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.394052] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.394071] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.408808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.408827] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.424507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.424535] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.437405] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.437424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.452246] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.452265] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.465062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.465081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.477710] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 
[2024-12-15 05:40:41.477729] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.492414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.492439] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.505687] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.505706] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.520411] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.520430] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:27.861 [2024-12-15 05:40:41.533676] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:27.861 [2024-12-15 05:40:41.533695] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.120 [2024-12-15 05:40:41.548598] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.120 [2024-12-15 05:40:41.548619] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.120 [2024-12-15 05:40:41.561261] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.120 [2024-12-15 05:40:41.561281] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.120 [2024-12-15 05:40:41.575772] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.120 [2024-12-15 05:40:41.575792] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.120 [2024-12-15 05:40:41.590038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.120 [2024-12-15 05:40:41.590058] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.120 [2024-12-15 05:40:41.604214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.120 [2024-12-15 05:40:41.604234] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.120 [2024-12-15 05:40:41.617553] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.120 [2024-12-15 05:40:41.617573] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.120 [2024-12-15 05:40:41.632374] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.632393] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.645656] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.645675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.660247] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.660267] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.673396] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.673416] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.688314] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.688337] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.702437] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.702457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:28.121 [2024-12-15 05:40:41.716523] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.716543] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.729076] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.729094] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.742077] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.742096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.756749] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.756769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.772303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.772322] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.785680] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.785697] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.121 [2024-12-15 05:40:41.800264] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.121 [2024-12-15 05:40:41.800283] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.814539] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.814558] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 16965.00 IOPS, 132.54 MiB/s 
[2024-12-15T04:40:42.067Z] [2024-12-15 05:40:41.829234] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.829252] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.844152] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.844171] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.858284] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.858303] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.872466] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.872485] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.883733] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.883753] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.898384] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.898401] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.913074] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.913092] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.928179] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.928196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.942493] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.942510] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.956917] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.956933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.972545] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.972562] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.984520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.984539] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:41.998686] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:41.998704] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:42.013506] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:42.013523] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:42.024487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:42.024504] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:42.038501] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:42.038518] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.380 [2024-12-15 05:40:42.053001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:28.380 [2024-12-15 05:40:42.053018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.068367] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.068388] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.081005] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.081022] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.096906] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.096923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.111868] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.111885] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.125561] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.125578] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.140503] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.140520] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.153295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.153311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.165623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 
[2024-12-15 05:40:42.165640] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.180197] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.180214] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.192353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.192374] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.206432] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.206451] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.221002] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.221021] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.236323] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.236341] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.249872] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.249890] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.264118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.264137] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.277312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.277331] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.292064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.292095] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.305746] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.305764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.639 [2024-12-15 05:40:42.320649] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.639 [2024-12-15 05:40:42.320668] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.333631] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.333650] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.347926] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.347945] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.362487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.362505] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.376689] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.376707] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.389001] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.389019] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:28.898 [2024-12-15 05:40:42.401959] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.401979] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.416423] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.416443] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.429270] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.429288] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.443883] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.443901] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.457328] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.457347] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.472245] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.472264] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.486000] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.486018] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.500766] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.500785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.515886] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.515905] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.530195] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.530213] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.544654] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.544672] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.556644] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.556663] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:28.898 [2024-12-15 05:40:42.570587] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:28.898 [2024-12-15 05:40:42.570606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.585262] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.585282] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.600228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.600246] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.614231] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.614250] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.628848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.628866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.640848] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.640866] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.653750] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.653769] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.668172] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.668191] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.682176] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.682195] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.696953] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.696973] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.711846] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.711865] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.726096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.726114] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.740358] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 
[2024-12-15 05:40:42.740381] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.753297] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.753316] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.768258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.768277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.781842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.781861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.796450] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.796471] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.808732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.808750] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 [2024-12-15 05:40:42.822108] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.822126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.157 17030.50 IOPS, 133.05 MiB/s [2024-12-15T04:40:42.844Z] [2024-12-15 05:40:42.836738] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.157 [2024-12-15 05:40:42.836757] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.852593] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 
[2024-12-15 05:40:42.852612] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.864119] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.864138] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.878064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.878082] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.892602] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.892622] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.906564] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.906583] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.921131] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.921149] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.936842] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.936861] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.952429] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.952448] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.965961] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.965980] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.980791] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.980810] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:42.994068] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:42.994087] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:43.008608] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:43.008632] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:43.019279] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:43.019298] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:43.033507] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:43.033527] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:43.047830] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:43.047850] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:43.062436] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:43.062455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.417 [2024-12-15 05:40:43.076315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:43.076334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:29.417 [2024-12-15 05:40:43.090402] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.417 [2024-12-15 05:40:43.090422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.104891] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.104915] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.120305] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.120324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.133390] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.133408] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.148595] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.148614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.159187] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.159205] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.173862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.173880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.188619] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.188638] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.201759] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.201777] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.216836] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.216855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.232316] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.232334] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.245845] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.245862] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.260519] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.260537] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.274050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.274072] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.288862] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.288879] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.304112] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.304130] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.317408] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.317424] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.332692] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.332711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.344803] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.344820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.677 [2024-12-15 05:40:43.358096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.677 [2024-12-15 05:40:43.358113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.372643] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.372661] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.383368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.383386] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.398364] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.398382] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.413101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.413119] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.428404] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 
[2024-12-15 05:40:43.428422] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.442202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.442219] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.457096] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.457113] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.472295] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.472313] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.485214] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.485231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.498139] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.498156] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.513146] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.513174] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.528343] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.528361] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.542732] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.542749] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.557189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.557206] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.572611] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.572628] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.585497] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.585515] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.600269] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.600287] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:29.936 [2024-12-15 05:40:43.614580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:29.936 [2024-12-15 05:40:43.614599] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.629582] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.629600] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.644627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.644646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.655213] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.655231] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:30.196 [2024-12-15 05:40:43.670047] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.670065] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.684861] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.684877] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.700027] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.700045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.714228] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.714245] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.728838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.728855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.741103] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.741122] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.754303] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.754327] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.768499] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.768517] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.780747] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.780764] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.794548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.794566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.809520] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.809540] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.824259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.824277] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 16968.00 IOPS, 132.56 MiB/s [2024-12-15T04:40:43.883Z] [2024-12-15 05:40:43.835485] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.835502] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.850225] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.850242] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.864895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.864912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.196 [2024-12-15 05:40:43.880366] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.196 [2024-12-15 05:40:43.880385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.894079] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.894096] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.908923] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.908940] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.924312] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.924330] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.937913] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.937931] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.952790] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.952808] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.968670] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.968688] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.980693] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.980711] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:43.994341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:43.994359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.008859] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.008876] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.024580] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.024598] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.038414] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.038431] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.053050] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.053067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.068645] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.068662] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.082288] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.082306] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.097018] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.097036] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.112222] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.112241] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.125410] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 
[2024-12-15 05:40:44.125429] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.455 [2024-12-15 05:40:44.138032] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.455 [2024-12-15 05:40:44.138050] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.153113] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.153132] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.164098] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.164124] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.178368] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.178385] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.193283] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.193301] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.208326] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.208344] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.220941] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.220959] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.234177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.234197] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.249048] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.249067] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.714 [2024-12-15 05:40:44.260935] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.714 [2024-12-15 05:40:44.260953] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.276824] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.276842] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.292409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.292427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.306265] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.306284] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.320802] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.320820] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.335965] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.335987] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.350009] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.350028] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:30.715 [2024-12-15 05:40:44.364344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.364363] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.377359] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.377377] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.715 [2024-12-15 05:40:44.392613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.715 [2024-12-15 05:40:44.392630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.403627] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.403646] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.418541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.418560] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.432558] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.432576] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.445321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.445340] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.460285] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.460304] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.473348] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.473367] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.488828] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.488845] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.505024] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.505042] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.520487] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.520506] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.533532] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.533550] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.548607] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.548626] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.561155] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.561173] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.573767] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.573784] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.588764] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.588782] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.605101] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.605123] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.621011] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.621030] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.636212] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.636230] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:30.974 [2024-12-15 05:40:44.650551] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:30.974 [2024-12-15 05:40:44.650569] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.665468] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.665486] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.680118] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.680136] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.693895] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.693912] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.708351] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 
[2024-12-15 05:40:44.708369] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.722315] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.722332] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.736748] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.736765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.752409] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.752427] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.765825] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.765843] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.780541] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.780559] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.793353] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.793370] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.808207] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.808225] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.822291] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.822310] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 16913.00 IOPS, 132.13 MiB/s [2024-12-15T04:40:44.920Z] [2024-12-15 05:40:44.837165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.837183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.852512] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.852531] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.864657] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.864675] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.878783] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.878803] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.893311] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.893329] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.233 [2024-12-15 05:40:44.908296] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.233 [2024-12-15 05:40:44.908314] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:44.922747] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:44.922765] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:44.937062] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:44.937079] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:44.952021] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:44.952039] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:44.966701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:44.966718] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:44.981709] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:44.981727] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:44.996099] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:44.996126] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.009375] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.009392] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.020540] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.020557] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.034321] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.034339] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.048768] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.048785] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:31.493 [2024-12-15 05:40:45.064440] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.064457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.078623] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.078641] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.093092] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.093109] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.108985] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.109010] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.124613] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.124630] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.138504] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.138522] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.153039] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.153058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.493 [2024-12-15 05:40:45.168493] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.493 [2024-12-15 05:40:45.168511] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.182723] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.182741] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.197588] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.197606] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.212606] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.212624] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.225344] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.225362] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.240462] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.240480] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.253064] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.253081] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.266038] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.266056] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.280143] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.280161] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.294189] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.294207] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.308808] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.308824] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.323873] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.323891] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.338028] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.338045] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.352769] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.352786] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.368837] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.368855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.382376] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.382394] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.397202] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.397220] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.412863] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 
[2024-12-15 05:40:45.412880] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:31.752 [2024-12-15 05:40:45.426293] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:31.752 [2024-12-15 05:40:45.426311] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.440711] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.440730] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.452165] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.452183] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.466244] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.466262] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.480774] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.480791] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.496149] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.496167] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.510696] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.510713] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.525599] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.525617] 
nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.539701] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.539720] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.554258] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.554276] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.569041] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.569058] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.583339] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.583357] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.597597] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.597614] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.611838] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.611855] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.626548] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.626566] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.640930] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.640948] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:40:32.011 [2024-12-15 05:40:45.656232] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.656251] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.669177] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.669196] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.684259] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.684279] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.011 [2024-12-15 05:40:45.695341] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.011 [2024-12-15 05:40:45.695359] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.271 [2024-12-15 05:40:45.709904] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.271 [2024-12-15 05:40:45.709923] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.271 [2024-12-15 05:40:45.724957] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.271 [2024-12-15 05:40:45.724975] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.271 [2024-12-15 05:40:45.740260] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.271 [2024-12-15 05:40:45.740278] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.271 [2024-12-15 05:40:45.751822] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.271 [2024-12-15 05:40:45.751841] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.271 [2024-12-15 05:40:45.766079] 
subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.766097] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.780983] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.781007] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.795915] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.795933] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.809448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.809466] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.824646] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.824665] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.836306] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.836324] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 16905.40 IOPS, 132.07 MiB/s
00:40:32.271 Latency(us)
00:40:32.271 [2024-12-15T04:40:45.958Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:40:32.271 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:40:32.271 Nvme1n1 : 5.01 16908.87 132.10 0.00 0.00 7563.54 1708.62 12732.71
00:40:32.271 [2024-12-15T04:40:45.958Z] ===================================================================================================================
00:40:32.271 [2024-12-15T04:40:45.958Z] Total : 16908.87 132.10 0.00 0.00 7563.54 1708.62 12732.71
00:40:32.271 [2024-12-15 05:40:45.844452] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.844469] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.856447] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.856463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.868457] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.868473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.880453] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.880473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.892449] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.892473] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.904448] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.904463] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.916444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.916459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15 05:40:45.928442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:40:32.271 [2024-12-15 05:40:45.928455] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:40:32.271 [2024-12-15
05:40:45.940445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.271 [2024-12-15 05:40:45.940459] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.271 [2024-12-15 05:40:45.952444] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.271 [2024-12-15 05:40:45.952457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.531 [2024-12-15 05:40:45.964442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.531 [2024-12-15 05:40:45.964453] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.531 [2024-12-15 05:40:45.976445] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.531 [2024-12-15 05:40:45.976457] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.531 [2024-12-15 05:40:45.988442] subsystem.c:2130:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:40:32.531 [2024-12-15 05:40:45.988452] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:32.531 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (598996) - No such process 00:40:32.531 05:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 598996 00:40:32.531 05:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:40:32.531 05:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.531 05:40:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.531 05:40:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:32.531 delay0 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:32.531 05:40:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:40:32.531 [2024-12-15 05:40:46.174129] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:40:40.649 [2024-12-15 05:40:53.302254] tcp.c:1790:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1df05b0 is same with the state(6) to be set 00:40:40.649 Initializing NVMe Controllers 00:40:40.649 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:40:40.649 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:40:40.649 Initialization complete. 
Launching workers. 00:40:40.649 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 260, failed: 22214 00:40:40.649 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 22378, failed to submit 96 00:40:40.649 success 22269, unsuccessful 109, failed 0 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@516 -- # nvmfcleanup 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@121 -- # sync 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@124 -- # set +e 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:40.649 rmmod nvme_tcp 00:40:40.649 rmmod nvme_fabrics 00:40:40.649 rmmod nvme_keyring 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@128 -- # set -e 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # return 0 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@517 -- # '[' -n 597206 ']' 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@518 -- # killprocess 597206 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
common/autotest_common.sh@954 -- # '[' -z 597206 ']' 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 597206 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 597206 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 597206' 00:40:40.649 killing process with pid 597206 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 597206 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 597206 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@297 -- # iptr 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-save 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:40.649 
05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@791 -- # iptables-restore 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:40.649 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:40.650 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:40.650 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:40.650 05:40:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:42.026 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:42.026 00:40:42.026 real 0m32.517s 00:40:42.026 user 0m41.869s 00:40:42.026 sys 0m13.119s 00:40:42.026 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:42.026 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:40:42.026 ************************************ 00:40:42.026 END TEST nvmf_zcopy 00:40:42.026 ************************************ 00:40:42.026 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:42.026 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:42.026 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:42.026 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:42.285 
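The nvmf_zcopy teardown traced above kills the background I/O generator and then waits on it, tolerating the race where the process has already exited ("kill: (598996) - No such process" followed by the suppressed failure). A minimal runnable sketch of that tolerate-missing-PID pattern, assuming nothing beyond bash itself (PID 598996 in the log is the real process; `sleep` here is just an illustrative stand-in):

```shell
#!/usr/bin/env bash
set -e
# Start a short-lived background job, let it exit, then kill+wait it
# the way zcopy.sh does: the kill may race with the exit, so the
# ESRCH failure is suppressed rather than aborting the script.
sleep 0.1 &
pid=$!
sleep 0.3                        # ensure the background job has already exited
kill "$pid" 2>/dev/null || true  # may fail with "No such process"; ignore it
wait "$pid" 2>/dev/null || true  # collect the exit status either way
echo ok
```

The `|| true` guards are what keep a `set -e` harness alive through the race; the exit status of the killed workload is still recoverable from `wait` when it matters.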
************************************ 00:40:42.285 START TEST nvmf_nmic 00:40:42.285 ************************************ 00:40:42.285 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:40:42.285 * Looking for test storage... 00:40:42.285 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:42.285 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:42.285 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lcov --version 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:40:42.286 05:40:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:40:42.286 05:40:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:42.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.286 --rc genhtml_branch_coverage=1 00:40:42.286 --rc genhtml_function_coverage=1 00:40:42.286 --rc genhtml_legend=1 00:40:42.286 --rc geninfo_all_blocks=1 00:40:42.286 --rc geninfo_unexecuted_blocks=1 00:40:42.286 00:40:42.286 ' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:42.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.286 --rc genhtml_branch_coverage=1 00:40:42.286 --rc genhtml_function_coverage=1 00:40:42.286 --rc genhtml_legend=1 00:40:42.286 --rc geninfo_all_blocks=1 00:40:42.286 --rc geninfo_unexecuted_blocks=1 00:40:42.286 00:40:42.286 ' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:42.286 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.286 --rc genhtml_branch_coverage=1 00:40:42.286 --rc genhtml_function_coverage=1 00:40:42.286 --rc genhtml_legend=1 00:40:42.286 --rc geninfo_all_blocks=1 00:40:42.286 --rc geninfo_unexecuted_blocks=1 00:40:42.286 00:40:42.286 ' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:42.286 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:42.286 --rc genhtml_branch_coverage=1 00:40:42.286 --rc genhtml_function_coverage=1 00:40:42.286 --rc genhtml_legend=1 00:40:42.286 --rc geninfo_all_blocks=1 00:40:42.286 --rc geninfo_unexecuted_blocks=1 00:40:42.286 00:40:42.286 ' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:42.286 05:40:55 
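The lt/cmp_versions trace above (scripts/common.sh deciding that lcov 1.15 < 2 and enabling the extra coverage flags) splits each version string on separators and compares numeric components left to right. A simplified standalone sketch of that comparison, assuming plain dotted versions only (`ver_lt` is an illustrative name, not the SPDK helper, and the full harness also splits on '-' and ':'):

```shell
#!/usr/bin/env bash
# Component-wise numeric version comparison: returns 0 (true) when $1 < $2.
ver_lt() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"
  read -ra b <<< "$2"
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0}   # missing components compare as 0
    if (( x < y )); then return 0; fi # first differing component decides
    if (( x > y )); then return 1; fi
  done
  return 1                            # equal => not less-than
}
ver_lt 1.15 2 && echo "1.15 < 2"
```

Numeric (not lexical) comparison is the point: it makes 1.2 < 1.10 come out true, which string comparison would get wrong.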
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.286 05:40:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # : 0 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:42.286 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 
00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # xtrace_disable 00:40:42.287 05:40:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.858 05:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # pci_devs=() 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # local -a pci_devs 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # pci_net_devs=() 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # pci_drivers=() 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@317 -- # local -A pci_drivers 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # net_devs=() 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@319 -- # local -ga net_devs 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # e810=() 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@320 -- # local -ga e810 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # x722=() 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # local -ga x722 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # mlx=() 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@322 -- # local -ga mlx 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@326 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:40:48.858 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:40:48.859 05:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:40:48.859 Found 0000:af:00.0 (0x8086 - 0x159b) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:40:48.859 Found 0000:af:00.1 (0x8086 - 0x159b) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:40:48.859 05:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:40:48.859 Found net devices under 0000:af:00.0: cvl_0_0 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:40:48.859 05:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@418 -- # [[ up == up ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:40:48.859 Found net devices under 0000:af:00.1: cvl_0_1 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@442 -- # is_hw=yes 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:40:48.859 05:41:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:40:48.859 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:40:48.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:40:48.859 00:40:48.859 --- 10.0.0.2 ping statistics --- 00:40:48.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.859 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:40:48.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:40:48.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.134 ms 00:40:48.859 00:40:48.859 --- 10.0.0.1 ping statistics --- 00:40:48.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:40:48.859 rtt min/avg/max/mdev = 0.134/0.134/0.134/0.000 ms 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@450 -- # return 0 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.859 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@509 -- # nvmfpid=604365 
00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@510 -- # waitforlisten 604365 00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 604365 ']' 00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:40:48.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:40:48.860 05:41:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 [2024-12-15 05:41:01.866501] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:40:48.860 [2024-12-15 05:41:01.867499] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:40:48.860 [2024-12-15 05:41:01.867539] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:40:48.860 [2024-12-15 05:41:01.946312] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:40:48.860 [2024-12-15 05:41:01.970881] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:40:48.860 [2024-12-15 05:41:01.970920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:40:48.860 [2024-12-15 05:41:01.970927] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:40:48.860 [2024-12-15 05:41:01.970934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:40:48.860 [2024-12-15 05:41:01.970939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:40:48.860 [2024-12-15 05:41:01.972261] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:40:48.860 [2024-12-15 05:41:01.972370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:40:48.860 [2024-12-15 05:41:01.972278] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:40:48.860 [2024-12-15 05:41:01.972371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:40:48.860 [2024-12-15 05:41:02.035297] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:40:48.860 [2024-12-15 05:41:02.035661] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:40:48.860 [2024-12-15 05:41:02.036215] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:40:48.860 [2024-12-15 05:41:02.036224] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:40:48.860 [2024-12-15 05:41:02.036389] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 [2024-12-15 05:41:02.101194] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 Malloc0 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 [2024-12-15 05:41:02.173243] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:40:48.860 05:41:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:40:48.860 test case1: single bdev can't be used in multiple subsystems 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 [2024-12-15 05:41:02.200860] 
bdev.c:8538:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:40:48.860 [2024-12-15 05:41:02.200881] subsystem.c:2160:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:40:48.860 [2024-12-15 05:41:02.200888] nvmf_rpc.c:1520:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:40:48.860 request: 00:40:48.860 { 00:40:48.860 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:40:48.860 "namespace": { 00:40:48.860 "bdev_name": "Malloc0", 00:40:48.860 "no_auto_visible": false, 00:40:48.860 "hide_metadata": false 00:40:48.860 }, 00:40:48.860 "method": "nvmf_subsystem_add_ns", 00:40:48.860 "req_id": 1 00:40:48.860 } 00:40:48.860 Got JSON-RPC error response 00:40:48.860 response: 00:40:48.860 { 00:40:48.860 "code": -32602, 00:40:48.860 "message": "Invalid parameters" 00:40:48.860 } 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:40:48.860 Adding namespace failed - expected result. 
00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:40:48.860 test case2: host connect to nvmf target in multiple paths 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:48.860 [2024-12-15 05:41:02.212933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:40:48.860 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:40:49.120 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:40:49.120 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:40:49.120 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:40:49.120 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:40:49.120 05:41:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:40:51.024 05:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:40:51.024 05:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:40:51.024 05:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:40:51.298 05:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:40:51.298 05:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:40:51.298 05:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:40:51.298 05:41:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:40:51.298 [global] 00:40:51.298 thread=1 00:40:51.298 invalidate=1 00:40:51.298 rw=write 00:40:51.298 time_based=1 00:40:51.298 runtime=1 00:40:51.298 ioengine=libaio 00:40:51.298 direct=1 00:40:51.298 bs=4096 00:40:51.298 iodepth=1 00:40:51.298 norandommap=0 00:40:51.298 numjobs=1 00:40:51.298 00:40:51.298 verify_dump=1 00:40:51.298 verify_backlog=512 00:40:51.298 verify_state_save=0 00:40:51.298 do_verify=1 00:40:51.298 verify=crc32c-intel 00:40:51.298 [job0] 00:40:51.298 filename=/dev/nvme0n1 00:40:51.298 Could not set queue depth (nvme0n1) 00:40:51.559 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:40:51.559 fio-3.35 00:40:51.559 Starting 1 thread 00:40:52.489 00:40:52.490 job0: (groupid=0, jobs=1): err= 0: pid=605063: Sun Dec 15 
05:41:06 2024 00:40:52.490 read: IOPS=2541, BW=9.93MiB/s (10.4MB/s)(9.94MiB/1001msec) 00:40:52.490 slat (nsec): min=6803, max=37205, avg=7822.58, stdev=1279.09 00:40:52.490 clat (usec): min=179, max=319, avg=235.12, stdev=22.28 00:40:52.490 lat (usec): min=187, max=328, avg=242.94, stdev=22.24 00:40:52.490 clat percentiles (usec): 00:40:52.490 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 206], 00:40:52.490 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:40:52.490 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 253], 00:40:52.490 | 99.00th=[ 258], 99.50th=[ 260], 99.90th=[ 265], 99.95th=[ 269], 00:40:52.490 | 99.99th=[ 322] 00:40:52.490 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:40:52.490 slat (nsec): min=9607, max=39035, avg=11093.84, stdev=1459.88 00:40:52.490 clat (usec): min=117, max=224, avg=132.06, stdev= 7.09 00:40:52.490 lat (usec): min=127, max=263, avg=143.15, stdev= 7.50 00:40:52.490 clat percentiles (usec): 00:40:52.490 | 1.00th=[ 122], 5.00th=[ 125], 10.00th=[ 126], 20.00th=[ 128], 00:40:52.490 | 30.00th=[ 129], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 133], 00:40:52.490 | 70.00th=[ 135], 80.00th=[ 135], 90.00th=[ 139], 95.00th=[ 143], 00:40:52.490 | 99.00th=[ 159], 99.50th=[ 180], 99.90th=[ 188], 99.95th=[ 194], 00:40:52.490 | 99.99th=[ 225] 00:40:52.490 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:40:52.490 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:40:52.490 lat (usec) : 250=92.48%, 500=7.52% 00:40:52.490 cpu : usr=4.00%, sys=8.00%, ctx=5104, majf=0, minf=1 00:40:52.490 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:40:52.490 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.490 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:40:52.490 issued rwts: total=2544,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:40:52.490 
latency : target=0, window=0, percentile=100.00%, depth=1 00:40:52.490 00:40:52.490 Run status group 0 (all jobs): 00:40:52.490 READ: bw=9.93MiB/s (10.4MB/s), 9.93MiB/s-9.93MiB/s (10.4MB/s-10.4MB/s), io=9.94MiB (10.4MB), run=1001-1001msec 00:40:52.490 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:40:52.490 00:40:52.490 Disk stats (read/write): 00:40:52.490 nvme0n1: ios=2199/2560, merge=0/0, ticks=500/322, in_queue=822, util=91.08% 00:40:52.490 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:40:52.747 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@516 
-- # nvmfcleanup 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@121 -- # sync 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@124 -- # set +e 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # for i in {1..20} 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:40:52.747 rmmod nvme_tcp 00:40:52.747 rmmod nvme_fabrics 00:40:52.747 rmmod nvme_keyring 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@128 -- # set -e 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # return 0 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@517 -- # '[' -n 604365 ']' 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@518 -- # killprocess 604365 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 604365 ']' 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 604365 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:40:52.747 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 604365 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 604365' 00:40:53.006 killing process with pid 604365 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 604365 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 604365 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@297 -- # iptr 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-save 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@791 -- # iptables-restore 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@302 -- # remove_spdk_ns 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 
00:40:53.006 05:41:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:40:55.542 00:40:55.542 real 0m12.968s 00:40:55.542 user 0m23.878s 00:40:55.542 sys 0m6.018s 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:40:55.542 ************************************ 00:40:55.542 END TEST nvmf_nmic 00:40:55.542 ************************************ 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:40:55.542 ************************************ 00:40:55.542 START TEST nvmf_fio_target 00:40:55.542 ************************************ 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:40:55.542 * Looking for test storage... 
00:40:55.542 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lcov --version 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:40:55.542 
05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:40:55.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.542 --rc genhtml_branch_coverage=1 00:40:55.542 --rc genhtml_function_coverage=1 00:40:55.542 --rc genhtml_legend=1 00:40:55.542 --rc geninfo_all_blocks=1 00:40:55.542 --rc geninfo_unexecuted_blocks=1 00:40:55.542 00:40:55.542 ' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:40:55.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.542 --rc genhtml_branch_coverage=1 00:40:55.542 --rc genhtml_function_coverage=1 00:40:55.542 --rc genhtml_legend=1 00:40:55.542 --rc geninfo_all_blocks=1 00:40:55.542 --rc geninfo_unexecuted_blocks=1 00:40:55.542 00:40:55.542 ' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:40:55.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.542 --rc genhtml_branch_coverage=1 00:40:55.542 --rc genhtml_function_coverage=1 00:40:55.542 --rc genhtml_legend=1 00:40:55.542 --rc geninfo_all_blocks=1 00:40:55.542 --rc geninfo_unexecuted_blocks=1 00:40:55.542 00:40:55.542 ' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:40:55.542 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:40:55.542 --rc genhtml_branch_coverage=1 00:40:55.542 --rc genhtml_function_coverage=1 00:40:55.542 --rc genhtml_legend=1 00:40:55.542 --rc geninfo_all_blocks=1 
00:40:55.542 --rc geninfo_unexecuted_blocks=1 00:40:55.542 00:40:55.542 ' 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.542 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:40:55.542 
05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.543 05:41:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # : 0 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@55 -- # have_pci_nics=0 00:40:55.543 
05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@476 -- # prepare_net_devs 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@438 -- # local -g is_hw=no 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@440 -- # remove_spdk_ns 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:40:55.543 05:41:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # xtrace_disable 00:40:55.543 05:41:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # pci_devs=() 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # net_devs=() 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # e810=() 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@320 -- # local -ga e810 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # x722=() 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # local -ga x722 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # mlx=() 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@322 -- # local -ga mlx 00:41:02.112 05:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:02.112 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:02.112 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:02.112 
05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:02.112 Found net 
devices under 0000:af:00.0: cvl_0_0 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:02.112 Found net devices under 0000:af:00.1: cvl_0_1 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@442 -- # is_hw=yes 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:02.112 05:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@271 -- # ip netns add 
cvl_0_0_ns_spdk 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:02.112 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:02.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:02.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.275 ms 00:41:02.113 00:41:02.113 --- 10.0.0.2 ping statistics --- 00:41:02.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.113 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:02.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:02.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.106 ms 00:41:02.113 00:41:02.113 --- 10.0.0.1 ping statistics --- 00:41:02.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:02.113 rtt min/avg/max/mdev = 0.106/0.106/0.106/0.000 ms 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@450 -- # return 0 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:02.113 05:41:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@509 -- # nvmfpid=608610 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@510 -- # waitforlisten 608610 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 608610 ']' 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:02.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:02.113 05:41:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:02.113 [2024-12-15 05:41:14.860154] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:02.113 [2024-12-15 05:41:14.861072] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:41:02.113 [2024-12-15 05:41:14.861106] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:02.113 [2024-12-15 05:41:14.938001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:02.113 [2024-12-15 05:41:14.961128] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:02.113 [2024-12-15 05:41:14.961167] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:02.113 [2024-12-15 05:41:14.961175] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:02.113 [2024-12-15 05:41:14.961182] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:02.113 [2024-12-15 05:41:14.961188] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:02.113 [2024-12-15 05:41:14.962511] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:02.113 [2024-12-15 05:41:14.962621] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:02.113 [2024-12-15 05:41:14.962755] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:02.113 [2024-12-15 05:41:14.962756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:02.113 [2024-12-15 05:41:15.026793] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:02.113 [2024-12-15 05:41:15.027720] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:02.113 [2024-12-15 05:41:15.027967] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:02.113 [2024-12-15 05:41:15.028353] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:02.113 [2024-12-15 05:41:15.028383] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:41:02.113 [2024-12-15 05:41:15.267488] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 
512 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:41:02.113 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:02.372 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:41:02.372 05:41:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:02.630 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:41:02.630 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:41:02.889 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:02.889 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:41:02.889 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:03.147 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:41:03.147 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:41:03.406 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 
00:41:03.406 05:41:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:41:03.664 05:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:03.664 05:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:03.664 05:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:03.921 05:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:41:03.921 05:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:41:04.177 05:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:04.434 [2024-12-15 05:41:17.895361] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:04.434 05:41:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:41:04.690 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:41:04.690 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:04.947 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:41:04.947 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:41:04.947 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:04.947 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:41:04.947 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:41:04.947 05:41:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:41:07.472 05:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:07.472 05:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:07.472 05:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:07.472 05:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:41:07.472 05:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:07.472 05:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@1212 -- # return 0 00:41:07.472 05:41:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:41:07.472 [global] 00:41:07.472 thread=1 00:41:07.472 invalidate=1 00:41:07.472 rw=write 00:41:07.472 time_based=1 00:41:07.472 runtime=1 00:41:07.472 ioengine=libaio 00:41:07.472 direct=1 00:41:07.472 bs=4096 00:41:07.472 iodepth=1 00:41:07.472 norandommap=0 00:41:07.472 numjobs=1 00:41:07.472 00:41:07.472 verify_dump=1 00:41:07.472 verify_backlog=512 00:41:07.472 verify_state_save=0 00:41:07.472 do_verify=1 00:41:07.472 verify=crc32c-intel 00:41:07.472 [job0] 00:41:07.472 filename=/dev/nvme0n1 00:41:07.472 [job1] 00:41:07.472 filename=/dev/nvme0n2 00:41:07.472 [job2] 00:41:07.472 filename=/dev/nvme0n3 00:41:07.472 [job3] 00:41:07.472 filename=/dev/nvme0n4 00:41:07.472 Could not set queue depth (nvme0n1) 00:41:07.472 Could not set queue depth (nvme0n2) 00:41:07.472 Could not set queue depth (nvme0n3) 00:41:07.472 Could not set queue depth (nvme0n4) 00:41:07.472 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.472 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.472 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.472 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:07.472 fio-3.35 00:41:07.472 Starting 4 threads 00:41:08.846 00:41:08.846 job0: (groupid=0, jobs=1): err= 0: pid=609857: Sun Dec 15 05:41:22 2024 00:41:08.846 read: IOPS=2294, BW=9179KiB/s (9399kB/s)(9188KiB/1001msec) 00:41:08.846 slat (nsec): min=6375, max=25785, avg=7174.67, stdev=907.84 00:41:08.846 clat (usec): min=192, max=417, avg=235.19, stdev=15.54 00:41:08.846 lat (usec): min=199, max=424, 
avg=242.37, stdev=15.57 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[ 212], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 221], 00:41:08.846 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 237], 60.00th=[ 243], 00:41:08.846 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:41:08.846 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 363], 99.95th=[ 408], 00:41:08.846 | 99.99th=[ 416] 00:41:08.846 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:41:08.846 slat (nsec): min=8945, max=38727, avg=10204.91, stdev=1256.11 00:41:08.846 clat (usec): min=124, max=555, avg=158.92, stdev=33.72 00:41:08.846 lat (usec): min=135, max=565, avg=169.13, stdev=33.79 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[ 133], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 141], 00:41:08.846 | 30.00th=[ 143], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 147], 00:41:08.846 | 70.00th=[ 151], 80.00th=[ 163], 90.00th=[ 217], 95.00th=[ 229], 00:41:08.846 | 99.00th=[ 293], 99.50th=[ 314], 99.90th=[ 322], 99.95th=[ 330], 00:41:08.846 | 99.99th=[ 553] 00:41:08.846 bw ( KiB/s): min=10272, max=10272, per=64.76%, avg=10272.00, stdev= 0.00, samples=1 00:41:08.846 iops : min= 2568, max= 2568, avg=2568.00, stdev= 0.00, samples=1 00:41:08.846 lat (usec) : 250=91.27%, 500=8.71%, 750=0.02% 00:41:08.846 cpu : usr=2.00%, sys=4.80%, ctx=4857, majf=0, minf=2 00:41:08.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.846 issued rwts: total=2297,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.846 job1: (groupid=0, jobs=1): err= 0: pid=609858: Sun Dec 15 05:41:22 2024 00:41:08.846 read: IOPS=21, BW=87.0KiB/s (89.0kB/s)(88.0KiB/1012msec) 00:41:08.846 slat (nsec): min=9166, max=25422, 
avg=21891.91, stdev=2947.52 00:41:08.846 clat (usec): min=40846, max=41636, avg=40995.60, stdev=157.90 00:41:08.846 lat (usec): min=40869, max=41645, avg=41017.49, stdev=155.19 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:08.846 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:08.846 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.846 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:08.846 | 99.99th=[41681] 00:41:08.846 write: IOPS=505, BW=2024KiB/s (2072kB/s)(2048KiB/1012msec); 0 zone resets 00:41:08.846 slat (nsec): min=9092, max=35220, avg=10337.48, stdev=1744.98 00:41:08.846 clat (usec): min=129, max=559, avg=200.55, stdev=36.08 00:41:08.846 lat (usec): min=138, max=570, avg=210.89, stdev=36.64 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[ 135], 5.00th=[ 147], 10.00th=[ 155], 20.00th=[ 172], 00:41:08.846 | 30.00th=[ 186], 40.00th=[ 196], 50.00th=[ 202], 60.00th=[ 208], 00:41:08.846 | 70.00th=[ 217], 80.00th=[ 223], 90.00th=[ 237], 95.00th=[ 251], 00:41:08.846 | 99.00th=[ 281], 99.50th=[ 379], 99.90th=[ 562], 99.95th=[ 562], 00:41:08.846 | 99.99th=[ 562] 00:41:08.846 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:41:08.846 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:08.846 lat (usec) : 250=91.01%, 500=4.68%, 750=0.19% 00:41:08.846 lat (msec) : 50=4.12% 00:41:08.846 cpu : usr=0.30%, sys=0.40%, ctx=534, majf=0, minf=2 00:41:08.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.846 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.846 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.846 latency : target=0, window=0, percentile=100.00%, depth=1 
00:41:08.846 job2: (groupid=0, jobs=1): err= 0: pid=609859: Sun Dec 15 05:41:22 2024 00:41:08.846 read: IOPS=21, BW=85.2KiB/s (87.2kB/s)(88.0KiB/1033msec) 00:41:08.846 slat (nsec): min=10570, max=24368, avg=13431.82, stdev=3402.73 00:41:08.846 clat (usec): min=40907, max=41923, avg=41029.00, stdev=202.79 00:41:08.846 lat (usec): min=40917, max=41948, avg=41042.43, stdev=205.35 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:41:08.846 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:08.846 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.846 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681], 00:41:08.846 | 99.99th=[41681] 00:41:08.846 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:41:08.846 slat (nsec): min=11171, max=47913, avg=13040.39, stdev=2280.43 00:41:08.846 clat (usec): min=163, max=321, avg=236.88, stdev=14.97 00:41:08.846 lat (usec): min=175, max=369, avg=249.92, stdev=15.38 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[ 174], 5.00th=[ 221], 10.00th=[ 229], 20.00th=[ 233], 00:41:08.846 | 30.00th=[ 237], 40.00th=[ 237], 50.00th=[ 239], 60.00th=[ 241], 00:41:08.846 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 249], 95.00th=[ 253], 00:41:08.846 | 99.00th=[ 265], 99.50th=[ 269], 99.90th=[ 322], 99.95th=[ 322], 00:41:08.846 | 99.99th=[ 322] 00:41:08.846 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:41:08.846 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:08.846 lat (usec) : 250=88.58%, 500=7.30% 00:41:08.846 lat (msec) : 50=4.12% 00:41:08.846 cpu : usr=0.48%, sys=0.87%, ctx=534, majf=0, minf=1 00:41:08.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.846 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.846 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.846 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.846 job3: (groupid=0, jobs=1): err= 0: pid=609860: Sun Dec 15 05:41:22 2024 00:41:08.846 read: IOPS=22, BW=90.2KiB/s (92.4kB/s)(92.0KiB/1020msec) 00:41:08.846 slat (nsec): min=9451, max=26258, avg=21371.39, stdev=3102.70 00:41:08.846 clat (usec): min=1592, max=41402, avg=39260.98, stdev=8212.05 00:41:08.846 lat (usec): min=1614, max=41417, avg=39282.36, stdev=8211.94 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[ 1598], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:41:08.846 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:08.846 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:08.846 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:08.846 | 99.99th=[41157] 00:41:08.846 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:41:08.846 slat (nsec): min=10152, max=42818, avg=11939.07, stdev=2551.77 00:41:08.846 clat (usec): min=143, max=546, avg=211.30, stdev=27.78 00:41:08.846 lat (usec): min=154, max=560, avg=223.24, stdev=28.28 00:41:08.846 clat percentiles (usec): 00:41:08.846 | 1.00th=[ 159], 5.00th=[ 172], 10.00th=[ 180], 20.00th=[ 192], 00:41:08.846 | 30.00th=[ 202], 40.00th=[ 208], 50.00th=[ 212], 60.00th=[ 217], 00:41:08.846 | 70.00th=[ 221], 80.00th=[ 225], 90.00th=[ 237], 95.00th=[ 253], 00:41:08.846 | 99.00th=[ 269], 99.50th=[ 297], 99.90th=[ 545], 99.95th=[ 545], 00:41:08.846 | 99.99th=[ 545] 00:41:08.846 bw ( KiB/s): min= 4096, max= 4096, per=25.83%, avg=4096.00, stdev= 0.00, samples=1 00:41:08.846 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:08.846 lat (usec) : 250=90.28%, 500=5.23%, 750=0.19% 00:41:08.846 lat (msec) : 2=0.19%, 50=4.11% 00:41:08.846 cpu : usr=0.69%, sys=0.69%, ctx=535, majf=0, minf=1 
00:41:08.846 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:08.846 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:08.847 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:08.847 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:08.847 00:41:08.847 Run status group 0 (all jobs): 00:41:08.847 READ: bw=9154KiB/s (9374kB/s), 85.2KiB/s-9179KiB/s (87.2kB/s-9399kB/s), io=9456KiB (9683kB), run=1001-1033msec 00:41:08.847 WRITE: bw=15.5MiB/s (16.2MB/s), 1983KiB/s-9.99MiB/s (2030kB/s-10.5MB/s), io=16.0MiB (16.8MB), run=1001-1033msec 00:41:08.847 00:41:08.847 Disk stats (read/write): 00:41:08.847 nvme0n1: ios=2058/2048, merge=0/0, ticks=477/319, in_queue=796, util=86.67% 00:41:08.847 nvme0n2: ios=33/512, merge=0/0, ticks=749/95, in_queue=844, util=86.98% 00:41:08.847 nvme0n3: ios=17/512, merge=0/0, ticks=697/118, in_queue=815, util=88.94% 00:41:08.847 nvme0n4: ios=17/512, merge=0/0, ticks=697/101, in_queue=798, util=89.59% 00:41:08.847 05:41:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:41:08.847 [global] 00:41:08.847 thread=1 00:41:08.847 invalidate=1 00:41:08.847 rw=randwrite 00:41:08.847 time_based=1 00:41:08.847 runtime=1 00:41:08.847 ioengine=libaio 00:41:08.847 direct=1 00:41:08.847 bs=4096 00:41:08.847 iodepth=1 00:41:08.847 norandommap=0 00:41:08.847 numjobs=1 00:41:08.847 00:41:08.847 verify_dump=1 00:41:08.847 verify_backlog=512 00:41:08.847 verify_state_save=0 00:41:08.847 do_verify=1 00:41:08.847 verify=crc32c-intel 00:41:08.847 [job0] 00:41:08.847 filename=/dev/nvme0n1 00:41:08.847 [job1] 00:41:08.847 filename=/dev/nvme0n2 00:41:08.847 [job2] 00:41:08.847 filename=/dev/nvme0n3 00:41:08.847 [job3] 00:41:08.847 filename=/dev/nvme0n4 
00:41:08.847 Could not set queue depth (nvme0n1) 00:41:08.847 Could not set queue depth (nvme0n2) 00:41:08.847 Could not set queue depth (nvme0n3) 00:41:08.847 Could not set queue depth (nvme0n4) 00:41:09.104 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.104 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.104 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.104 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:09.104 fio-3.35 00:41:09.104 Starting 4 threads 00:41:10.474 00:41:10.474 job0: (groupid=0, jobs=1): err= 0: pid=610231: Sun Dec 15 05:41:23 2024 00:41:10.474 read: IOPS=22, BW=89.1KiB/s (91.3kB/s)(92.0KiB/1032msec) 00:41:10.474 slat (nsec): min=9774, max=21401, avg=14580.91, stdev=3837.19 00:41:10.474 clat (usec): min=291, max=41308, avg=39208.70, stdev=8484.56 00:41:10.474 lat (usec): min=306, max=41318, avg=39223.28, stdev=8484.37 00:41:10.474 clat percentiles (usec): 00:41:10.474 | 1.00th=[ 293], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:10.474 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:10.474 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:41:10.474 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:41:10.474 | 99.99th=[41157] 00:41:10.474 write: IOPS=496, BW=1984KiB/s (2032kB/s)(2048KiB/1032msec); 0 zone resets 00:41:10.474 slat (nsec): min=10725, max=38638, avg=12375.71, stdev=2419.39 00:41:10.474 clat (usec): min=148, max=307, avg=237.58, stdev=13.16 00:41:10.474 lat (usec): min=162, max=346, avg=249.96, stdev=13.04 00:41:10.474 clat percentiles (usec): 00:41:10.474 | 1.00th=[ 163], 5.00th=[ 231], 10.00th=[ 233], 20.00th=[ 237], 00:41:10.474 | 30.00th=[ 237], 40.00th=[ 239], 50.00th=[ 239], 
60.00th=[ 241], 00:41:10.474 | 70.00th=[ 241], 80.00th=[ 243], 90.00th=[ 245], 95.00th=[ 247], 00:41:10.474 | 99.00th=[ 251], 99.50th=[ 255], 99.90th=[ 310], 99.95th=[ 310], 00:41:10.474 | 99.99th=[ 310] 00:41:10.474 bw ( KiB/s): min= 4096, max= 4096, per=15.88%, avg=4096.00, stdev= 0.00, samples=1 00:41:10.474 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:41:10.474 lat (usec) : 250=94.39%, 500=1.50% 00:41:10.474 lat (msec) : 50=4.11% 00:41:10.474 cpu : usr=0.87%, sys=0.48%, ctx=535, majf=0, minf=1 00:41:10.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.474 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.474 job1: (groupid=0, jobs=1): err= 0: pid=610232: Sun Dec 15 05:41:23 2024 00:41:10.474 read: IOPS=1571, BW=6288KiB/s (6439kB/s)(6332KiB/1007msec) 00:41:10.474 slat (nsec): min=6464, max=24422, avg=7278.80, stdev=955.21 00:41:10.474 clat (usec): min=203, max=41317, avg=385.53, stdev=1770.33 00:41:10.474 lat (usec): min=210, max=41325, avg=392.80, stdev=1770.57 00:41:10.474 clat percentiles (usec): 00:41:10.474 | 1.00th=[ 212], 5.00th=[ 225], 10.00th=[ 231], 20.00th=[ 245], 00:41:10.474 | 30.00th=[ 265], 40.00th=[ 277], 50.00th=[ 293], 60.00th=[ 310], 00:41:10.474 | 70.00th=[ 322], 80.00th=[ 351], 90.00th=[ 441], 95.00th=[ 498], 00:41:10.474 | 99.00th=[ 515], 99.50th=[ 570], 99.90th=[40633], 99.95th=[41157], 00:41:10.474 | 99.99th=[41157] 00:41:10.474 write: IOPS=2033, BW=8135KiB/s (8330kB/s)(8192KiB/1007msec); 0 zone resets 00:41:10.474 slat (nsec): min=9191, max=41918, avg=10330.24, stdev=1483.76 00:41:10.474 clat (usec): min=133, max=1388, avg=173.71, stdev=33.29 00:41:10.474 lat (usec): min=147, max=1397, avg=184.04, stdev=33.41 00:41:10.474 
clat percentiles (usec): 00:41:10.474 | 1.00th=[ 145], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:41:10.474 | 30.00th=[ 165], 40.00th=[ 169], 50.00th=[ 172], 60.00th=[ 176], 00:41:10.474 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 192], 95.00th=[ 198], 00:41:10.474 | 99.00th=[ 217], 99.50th=[ 245], 99.90th=[ 494], 99.95th=[ 553], 00:41:10.474 | 99.99th=[ 1385] 00:41:10.474 bw ( KiB/s): min= 8192, max= 8192, per=31.75%, avg=8192.00, stdev= 0.00, samples=2 00:41:10.474 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:41:10.474 lat (usec) : 250=66.18%, 500=31.86%, 750=1.85% 00:41:10.474 lat (msec) : 2=0.03%, 50=0.08% 00:41:10.474 cpu : usr=1.39%, sys=3.78%, ctx=3634, majf=0, minf=1 00:41:10.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.474 issued rwts: total=1583,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.474 job2: (groupid=0, jobs=1): err= 0: pid=610233: Sun Dec 15 05:41:23 2024 00:41:10.474 read: IOPS=2043, BW=8176KiB/s (8372kB/s)(8184KiB/1001msec) 00:41:10.474 slat (nsec): min=6881, max=22977, avg=7830.58, stdev=874.66 00:41:10.474 clat (usec): min=191, max=758, avg=282.69, stdev=69.62 00:41:10.474 lat (usec): min=199, max=766, avg=290.52, stdev=69.70 00:41:10.474 clat percentiles (usec): 00:41:10.474 | 1.00th=[ 212], 5.00th=[ 229], 10.00th=[ 235], 20.00th=[ 241], 00:41:10.474 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 262], 00:41:10.474 | 70.00th=[ 277], 80.00th=[ 318], 90.00th=[ 392], 95.00th=[ 449], 00:41:10.474 | 99.00th=[ 510], 99.50th=[ 515], 99.90th=[ 668], 99.95th=[ 742], 00:41:10.474 | 99.99th=[ 758] 00:41:10.474 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:41:10.474 slat (nsec): min=9495, 
max=38159, avg=10789.88, stdev=1553.80 00:41:10.474 clat (usec): min=135, max=285, avg=181.28, stdev=22.10 00:41:10.474 lat (usec): min=145, max=323, avg=192.07, stdev=22.40 00:41:10.474 clat percentiles (usec): 00:41:10.474 | 1.00th=[ 149], 5.00th=[ 157], 10.00th=[ 161], 20.00th=[ 165], 00:41:10.474 | 30.00th=[ 169], 40.00th=[ 174], 50.00th=[ 178], 60.00th=[ 180], 00:41:10.474 | 70.00th=[ 184], 80.00th=[ 192], 90.00th=[ 208], 95.00th=[ 239], 00:41:10.474 | 99.00th=[ 249], 99.50th=[ 253], 99.90th=[ 273], 99.95th=[ 277], 00:41:10.474 | 99.99th=[ 285] 00:41:10.474 bw ( KiB/s): min= 8192, max= 8192, per=31.75%, avg=8192.00, stdev= 0.00, samples=1 00:41:10.474 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:41:10.474 lat (usec) : 250=71.42%, 500=27.45%, 750=1.10%, 1000=0.02% 00:41:10.474 cpu : usr=3.40%, sys=6.30%, ctx=4094, majf=0, minf=1 00:41:10.474 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.474 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.474 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.474 issued rwts: total=2046,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.474 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.474 job3: (groupid=0, jobs=1): err= 0: pid=610234: Sun Dec 15 05:41:23 2024 00:41:10.474 read: IOPS=1932, BW=7728KiB/s (7914kB/s)(7968KiB/1031msec) 00:41:10.474 slat (nsec): min=7367, max=25254, avg=8842.03, stdev=1222.78 00:41:10.474 clat (usec): min=207, max=41115, avg=307.63, stdev=1576.50 00:41:10.474 lat (usec): min=215, max=41126, avg=316.47, stdev=1576.54 00:41:10.475 clat percentiles (usec): 00:41:10.475 | 1.00th=[ 221], 5.00th=[ 231], 10.00th=[ 237], 20.00th=[ 239], 00:41:10.475 | 30.00th=[ 243], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247], 00:41:10.475 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 258], 00:41:10.475 | 99.00th=[ 289], 99.50th=[ 412], 99.90th=[40633], 
99.95th=[41157], 00:41:10.475 | 99.99th=[41157] 00:41:10.475 write: IOPS=1986, BW=7946KiB/s (8136kB/s)(8192KiB/1031msec); 0 zone resets 00:41:10.475 slat (nsec): min=10369, max=48624, avg=12210.09, stdev=2027.54 00:41:10.475 clat (usec): min=135, max=268, avg=176.88, stdev=16.14 00:41:10.475 lat (usec): min=154, max=285, avg=189.09, stdev=16.30 00:41:10.475 clat percentiles (usec): 00:41:10.475 | 1.00th=[ 151], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 165], 00:41:10.475 | 30.00th=[ 169], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:41:10.475 | 70.00th=[ 180], 80.00th=[ 186], 90.00th=[ 196], 95.00th=[ 212], 00:41:10.475 | 99.00th=[ 237], 99.50th=[ 245], 99.90th=[ 269], 99.95th=[ 269], 00:41:10.475 | 99.99th=[ 269] 00:41:10.475 bw ( KiB/s): min= 6792, max= 9592, per=31.75%, avg=8192.00, stdev=1979.90, samples=2 00:41:10.475 iops : min= 1698, max= 2398, avg=2048.00, stdev=494.97, samples=2 00:41:10.475 lat (usec) : 250=87.97%, 500=11.93% 00:41:10.475 lat (msec) : 2=0.02%, 50=0.07% 00:41:10.475 cpu : usr=3.59%, sys=6.31%, ctx=4041, majf=0, minf=1 00:41:10.475 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:10.475 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.475 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:10.475 issued rwts: total=1992,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:10.475 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:10.475 00:41:10.475 Run status group 0 (all jobs): 00:41:10.475 READ: bw=21.4MiB/s (22.4MB/s), 89.1KiB/s-8176KiB/s (91.3kB/s-8372kB/s), io=22.0MiB (23.1MB), run=1001-1032msec 00:41:10.475 WRITE: bw=25.2MiB/s (26.4MB/s), 1984KiB/s-8184KiB/s (2032kB/s-8380kB/s), io=26.0MiB (27.3MB), run=1001-1032msec 00:41:10.475 00:41:10.475 Disk stats (read/write): 00:41:10.475 nvme0n1: ios=68/512, merge=0/0, ticks=720/115, in_queue=835, util=87.07% 00:41:10.475 nvme0n2: ios=1567/1951, merge=0/0, ticks=1457/332, in_queue=1789, 
util=99.39% 00:41:10.475 nvme0n3: ios=1593/1970, merge=0/0, ticks=501/343, in_queue=844, util=91.16% 00:41:10.475 nvme0n4: ios=1817/2048, merge=0/0, ticks=1010/343, in_queue=1353, util=96.02% 00:41:10.475 05:41:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:41:10.475 [global] 00:41:10.475 thread=1 00:41:10.475 invalidate=1 00:41:10.475 rw=write 00:41:10.475 time_based=1 00:41:10.475 runtime=1 00:41:10.475 ioengine=libaio 00:41:10.475 direct=1 00:41:10.475 bs=4096 00:41:10.475 iodepth=128 00:41:10.475 norandommap=0 00:41:10.475 numjobs=1 00:41:10.475 00:41:10.475 verify_dump=1 00:41:10.475 verify_backlog=512 00:41:10.475 verify_state_save=0 00:41:10.475 do_verify=1 00:41:10.475 verify=crc32c-intel 00:41:10.475 [job0] 00:41:10.475 filename=/dev/nvme0n1 00:41:10.475 [job1] 00:41:10.475 filename=/dev/nvme0n2 00:41:10.475 [job2] 00:41:10.475 filename=/dev/nvme0n3 00:41:10.475 [job3] 00:41:10.475 filename=/dev/nvme0n4 00:41:10.475 Could not set queue depth (nvme0n1) 00:41:10.475 Could not set queue depth (nvme0n2) 00:41:10.475 Could not set queue depth (nvme0n3) 00:41:10.475 Could not set queue depth (nvme0n4) 00:41:10.475 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.475 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.475 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.475 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:10.475 fio-3.35 00:41:10.475 Starting 4 threads 00:41:11.846 00:41:11.846 job0: (groupid=0, jobs=1): err= 0: pid=610592: Sun Dec 15 05:41:25 2024 00:41:11.846 read: IOPS=6368, BW=24.9MiB/s (26.1MB/s)(25.0MiB/1005msec) 00:41:11.846 slat (nsec): 
min=1492, max=5206.9k, avg=75889.56, stdev=478469.47 00:41:11.846 clat (usec): min=706, max=15430, avg=9993.23, stdev=1630.67 00:41:11.846 lat (usec): min=5075, max=15437, avg=10069.12, stdev=1659.50 00:41:11.846 clat percentiles (usec): 00:41:11.846 | 1.00th=[ 5473], 5.00th=[ 7373], 10.00th=[ 8160], 20.00th=[ 8848], 00:41:11.846 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9896], 60.00th=[10290], 00:41:11.846 | 70.00th=[10552], 80.00th=[11076], 90.00th=[12387], 95.00th=[13042], 00:41:11.846 | 99.00th=[13698], 99.50th=[14091], 99.90th=[15139], 99.95th=[15139], 00:41:11.846 | 99.99th=[15401] 00:41:11.846 write: IOPS=6622, BW=25.9MiB/s (27.1MB/s)(26.0MiB/1005msec); 0 zone resets 00:41:11.846 slat (usec): min=2, max=4943, avg=71.79, stdev=453.10 00:41:11.846 clat (usec): min=3320, max=14967, avg=9530.92, stdev=1554.04 00:41:11.846 lat (usec): min=3333, max=14984, avg=9602.71, stdev=1611.49 00:41:11.846 clat percentiles (usec): 00:41:11.846 | 1.00th=[ 5145], 5.00th=[ 5276], 10.00th=[ 7570], 20.00th=[ 9372], 00:41:11.846 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:41:11.846 | 70.00th=[10159], 80.00th=[10290], 90.00th=[10552], 95.00th=[11076], 00:41:11.846 | 99.00th=[13698], 99.50th=[14222], 99.90th=[14746], 99.95th=[14746], 00:41:11.846 | 99.99th=[15008] 00:41:11.846 bw ( KiB/s): min=25224, max=28024, per=37.50%, avg=26624.00, stdev=1979.90, samples=2 00:41:11.847 iops : min= 6306, max= 7006, avg=6656.00, stdev=494.97, samples=2 00:41:11.847 lat (usec) : 750=0.01% 00:41:11.847 lat (msec) : 4=0.08%, 10=57.28%, 20=42.64% 00:41:11.847 cpu : usr=6.08%, sys=7.67%, ctx=432, majf=0, minf=1 00:41:11.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.2%, >=64=99.5% 00:41:11.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:11.847 issued rwts: total=6400,6656,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.847 latency : 
target=0, window=0, percentile=100.00%, depth=128 00:41:11.847 job1: (groupid=0, jobs=1): err= 0: pid=610593: Sun Dec 15 05:41:25 2024 00:41:11.847 read: IOPS=1920, BW=7681KiB/s (7866kB/s)(7712KiB/1004msec) 00:41:11.847 slat (usec): min=3, max=26756, avg=229.44, stdev=1715.16 00:41:11.847 clat (usec): min=2245, max=67914, avg=30025.41, stdev=16609.42 00:41:11.847 lat (usec): min=5520, max=67924, avg=30254.84, stdev=16706.81 00:41:11.847 clat percentiles (usec): 00:41:11.847 | 1.00th=[ 5735], 5.00th=[10159], 10.00th=[11600], 20.00th=[14484], 00:41:11.847 | 30.00th=[15533], 40.00th=[21365], 50.00th=[30802], 60.00th=[32900], 00:41:11.847 | 70.00th=[39060], 80.00th=[44303], 90.00th=[53740], 95.00th=[65274], 00:41:11.847 | 99.00th=[67634], 99.50th=[67634], 99.90th=[67634], 99.95th=[67634], 00:41:11.847 | 99.99th=[67634] 00:41:11.847 write: IOPS=2039, BW=8159KiB/s (8355kB/s)(8192KiB/1004msec); 0 zone resets 00:41:11.847 slat (usec): min=5, max=44314, avg=265.05, stdev=2144.34 00:41:11.847 clat (usec): min=11316, max=69486, avg=29328.94, stdev=11279.34 00:41:11.847 lat (usec): min=11325, max=93249, avg=29593.99, stdev=11561.61 00:41:11.847 clat percentiles (usec): 00:41:11.847 | 1.00th=[11863], 5.00th=[11863], 10.00th=[12125], 20.00th=[12780], 00:41:11.847 | 30.00th=[25297], 40.00th=[30540], 50.00th=[32375], 60.00th=[32637], 00:41:11.847 | 70.00th=[35914], 80.00th=[38536], 90.00th=[42206], 95.00th=[46924], 00:41:11.847 | 99.00th=[50070], 99.50th=[53216], 99.90th=[61080], 99.95th=[65274], 00:41:11.847 | 99.99th=[69731] 00:41:11.847 bw ( KiB/s): min= 7760, max= 8624, per=11.54%, avg=8192.00, stdev=610.94, samples=2 00:41:11.847 iops : min= 1940, max= 2156, avg=2048.00, stdev=152.74, samples=2 00:41:11.847 lat (msec) : 4=0.03%, 10=1.91%, 20=30.61%, 50=60.39%, 100=7.07% 00:41:11.847 cpu : usr=2.69%, sys=2.59%, ctx=94, majf=0, minf=1 00:41:11.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:41:11.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:11.847 issued rwts: total=1928,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:11.847 job2: (groupid=0, jobs=1): err= 0: pid=610599: Sun Dec 15 05:41:25 2024 00:41:11.847 read: IOPS=3552, BW=13.9MiB/s (14.5MB/s)(14.0MiB/1009msec) 00:41:11.847 slat (usec): min=2, max=15056, avg=123.89, stdev=1039.50 00:41:11.847 clat (usec): min=6723, max=33392, avg=16569.13, stdev=3368.42 00:41:11.847 lat (usec): min=6732, max=33401, avg=16693.02, stdev=3476.71 00:41:11.847 clat percentiles (usec): 00:41:11.847 | 1.00th=[ 6783], 5.00th=[11338], 10.00th=[13698], 20.00th=[14615], 00:41:11.847 | 30.00th=[15533], 40.00th=[16188], 50.00th=[16450], 60.00th=[16712], 00:41:11.847 | 70.00th=[17171], 80.00th=[17957], 90.00th=[20055], 95.00th=[23987], 00:41:11.847 | 99.00th=[28181], 99.50th=[29230], 99.90th=[33424], 99.95th=[33424], 00:41:11.847 | 99.99th=[33424] 00:41:11.847 write: IOPS=4048, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1009msec); 0 zone resets 00:41:11.847 slat (usec): min=3, max=23774, avg=129.34, stdev=992.74 00:41:11.847 clat (usec): min=1512, max=55363, avg=16051.74, stdev=9914.03 00:41:11.847 lat (usec): min=1527, max=55371, avg=16181.08, stdev=9993.30 00:41:11.847 clat percentiles (usec): 00:41:11.847 | 1.00th=[ 5735], 5.00th=[ 7504], 10.00th=[ 8356], 20.00th=[10290], 00:41:11.847 | 30.00th=[11731], 40.00th=[12780], 50.00th=[13960], 60.00th=[15401], 00:41:11.847 | 70.00th=[15926], 80.00th=[16909], 90.00th=[22414], 95.00th=[45351], 00:41:11.847 | 99.00th=[53216], 99.50th=[54264], 99.90th=[55313], 99.95th=[55313], 00:41:11.847 | 99.99th=[55313] 00:41:11.847 bw ( KiB/s): min=15280, max=16384, per=22.30%, avg=15832.00, stdev=780.65, samples=2 00:41:11.847 iops : min= 3820, max= 4096, avg=3958.00, stdev=195.16, samples=2 00:41:11.847 lat (msec) : 2=0.03%, 4=0.13%, 10=10.67%, 20=77.87%, 50=9.86% 
00:41:11.847 lat (msec) : 100=1.45% 00:41:11.847 cpu : usr=4.46%, sys=5.26%, ctx=177, majf=0, minf=1 00:41:11.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:41:11.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:11.847 issued rwts: total=3584,4085,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:11.847 job3: (groupid=0, jobs=1): err= 0: pid=610601: Sun Dec 15 05:41:25 2024 00:41:11.847 read: IOPS=5077, BW=19.8MiB/s (20.8MB/s)(19.9MiB/1005msec) 00:41:11.847 slat (nsec): min=1367, max=10672k, avg=96893.74, stdev=784648.73 00:41:11.847 clat (usec): min=2822, max=23420, avg=12250.46, stdev=2949.47 00:41:11.847 lat (usec): min=7050, max=28063, avg=12347.35, stdev=3028.11 00:41:11.847 clat percentiles (usec): 00:41:11.847 | 1.00th=[ 7046], 5.00th=[ 8356], 10.00th=[ 9241], 20.00th=[10421], 00:41:11.847 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11731], 00:41:11.847 | 70.00th=[12387], 80.00th=[14091], 90.00th=[17171], 95.00th=[18482], 00:41:11.847 | 99.00th=[20841], 99.50th=[21365], 99.90th=[21890], 99.95th=[21890], 00:41:11.847 | 99.99th=[23462] 00:41:11.847 write: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec); 0 zone resets 00:41:11.847 slat (usec): min=2, max=44163, avg=92.73, stdev=895.15 00:41:11.847 clat (usec): min=2759, max=45653, avg=10996.03, stdev=2353.42 00:41:11.847 lat (usec): min=2771, max=69671, avg=11088.77, stdev=2547.87 00:41:11.847 clat percentiles (usec): 00:41:11.847 | 1.00th=[ 5997], 5.00th=[ 7373], 10.00th=[ 7701], 20.00th=[ 9241], 00:41:11.847 | 30.00th=[10028], 40.00th=[10683], 50.00th=[11338], 60.00th=[11600], 00:41:11.847 | 70.00th=[11863], 80.00th=[12125], 90.00th=[12649], 95.00th=[15664], 00:41:11.847 | 99.00th=[18482], 99.50th=[18744], 99.90th=[21365], 99.95th=[21627], 00:41:11.847 | 99.99th=[45876] 
00:41:11.847 bw ( KiB/s): min=20480, max=20480, per=28.85%, avg=20480.00, stdev= 0.00, samples=2 00:41:11.847 iops : min= 5120, max= 5120, avg=5120.00, stdev= 0.00, samples=2 00:41:11.847 lat (msec) : 4=0.13%, 10=22.50%, 20=76.09%, 50=1.28% 00:41:11.847 cpu : usr=3.88%, sys=6.77%, ctx=395, majf=0, minf=1 00:41:11.847 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:11.847 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:11.847 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:11.847 issued rwts: total=5103,5120,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:11.847 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:11.847 00:41:11.847 Run status group 0 (all jobs): 00:41:11.847 READ: bw=65.9MiB/s (69.1MB/s), 7681KiB/s-24.9MiB/s (7866kB/s-26.1MB/s), io=66.5MiB (69.7MB), run=1004-1009msec 00:41:11.847 WRITE: bw=69.3MiB/s (72.7MB/s), 8159KiB/s-25.9MiB/s (8355kB/s-27.1MB/s), io=70.0MiB (73.4MB), run=1004-1009msec 00:41:11.847 00:41:11.847 Disk stats (read/write): 00:41:11.847 nvme0n1: ios=5473/5632, merge=0/0, ticks=26666/24606, in_queue=51272, util=98.00% 00:41:11.847 nvme0n2: ios=1099/1536, merge=0/0, ticks=22940/25162, in_queue=48102, util=92.89% 00:41:11.847 nvme0n3: ios=3417/3584, merge=0/0, ticks=55914/45083, in_queue=100997, util=95.63% 00:41:11.847 nvme0n4: ios=4120/4519, merge=0/0, ticks=48664/48024, in_queue=96688, util=100.00% 00:41:11.847 05:41:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:41:11.847 [global] 00:41:11.847 thread=1 00:41:11.847 invalidate=1 00:41:11.847 rw=randwrite 00:41:11.847 time_based=1 00:41:11.847 runtime=1 00:41:11.847 ioengine=libaio 00:41:11.847 direct=1 00:41:11.847 bs=4096 00:41:11.847 iodepth=128 00:41:11.847 norandommap=0 00:41:11.847 numjobs=1 00:41:11.847 00:41:11.847 
verify_dump=1 00:41:11.847 verify_backlog=512 00:41:11.847 verify_state_save=0 00:41:11.847 do_verify=1 00:41:11.847 verify=crc32c-intel 00:41:11.847 [job0] 00:41:11.847 filename=/dev/nvme0n1 00:41:11.847 [job1] 00:41:11.847 filename=/dev/nvme0n2 00:41:11.847 [job2] 00:41:11.847 filename=/dev/nvme0n3 00:41:11.847 [job3] 00:41:11.847 filename=/dev/nvme0n4 00:41:11.847 Could not set queue depth (nvme0n1) 00:41:11.847 Could not set queue depth (nvme0n2) 00:41:11.847 Could not set queue depth (nvme0n3) 00:41:11.847 Could not set queue depth (nvme0n4) 00:41:12.104 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.104 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.104 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.104 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:41:12.104 fio-3.35 00:41:12.104 Starting 4 threads 00:41:13.475 00:41:13.476 job0: (groupid=0, jobs=1): err= 0: pid=610960: Sun Dec 15 05:41:26 2024 00:41:13.476 read: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec) 00:41:13.476 slat (nsec): min=1762, max=9022.8k, avg=136663.11, stdev=847323.89 00:41:13.476 clat (usec): min=9609, max=31968, avg=17388.97, stdev=3639.70 00:41:13.476 lat (usec): min=9615, max=31994, avg=17525.63, stdev=3685.93 00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[10421], 5.00th=[12256], 10.00th=[13173], 20.00th=[13960], 00:41:13.476 | 30.00th=[15139], 40.00th=[15795], 50.00th=[17171], 60.00th=[17957], 00:41:13.476 | 70.00th=[19006], 80.00th=[20841], 90.00th=[22414], 95.00th=[23725], 00:41:13.476 | 99.00th=[27395], 99.50th=[27395], 99.90th=[27657], 99.95th=[27919], 00:41:13.476 | 99.99th=[31851] 00:41:13.476 write: IOPS=3809, BW=14.9MiB/s (15.6MB/s)(14.9MiB/1004msec); 0 zone resets 00:41:13.476 
slat (usec): min=2, max=9451, avg=126.97, stdev=775.07 00:41:13.476 clat (usec): min=1374, max=29019, avg=16854.82, stdev=3456.35 00:41:13.476 lat (usec): min=5497, max=29041, avg=16981.79, stdev=3531.29 00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[ 5997], 5.00th=[10814], 10.00th=[12911], 20.00th=[13960], 00:41:13.476 | 30.00th=[16188], 40.00th=[16581], 50.00th=[16909], 60.00th=[17171], 00:41:13.476 | 70.00th=[17433], 80.00th=[18744], 90.00th=[21890], 95.00th=[22414], 00:41:13.476 | 99.00th=[26084], 99.50th=[26870], 99.90th=[27657], 99.95th=[28181], 00:41:13.476 | 99.99th=[28967] 00:41:13.476 bw ( KiB/s): min=13192, max=16384, per=20.53%, avg=14788.00, stdev=2257.08, samples=2 00:41:13.476 iops : min= 3298, max= 4096, avg=3697.00, stdev=564.27, samples=2 00:41:13.476 lat (msec) : 2=0.01%, 10=1.50%, 20=79.04%, 50=19.45% 00:41:13.476 cpu : usr=2.99%, sys=5.58%, ctx=293, majf=0, minf=1 00:41:13.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.1% 00:41:13.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.476 issued rwts: total=3584,3825,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.476 job1: (groupid=0, jobs=1): err= 0: pid=610963: Sun Dec 15 05:41:26 2024 00:41:13.476 read: IOPS=5615, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1003msec) 00:41:13.476 slat (nsec): min=1142, max=10799k, avg=83532.15, stdev=619980.23 00:41:13.476 clat (usec): min=4267, max=57436, avg=10978.60, stdev=3822.23 00:41:13.476 lat (usec): min=4273, max=57437, avg=11062.13, stdev=3852.61 00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[ 5997], 5.00th=[ 7439], 10.00th=[ 8094], 20.00th=[ 8979], 00:41:13.476 | 30.00th=[ 9372], 40.00th=[ 9896], 50.00th=[10159], 60.00th=[10552], 00:41:13.476 | 70.00th=[11338], 80.00th=[12518], 90.00th=[14615], 95.00th=[17171], 
00:41:13.476 | 99.00th=[19530], 99.50th=[23725], 99.90th=[57410], 99.95th=[57410], 00:41:13.476 | 99.99th=[57410] 00:41:13.476 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1003msec); 0 zone resets 00:41:13.476 slat (usec): min=2, max=9874, avg=78.17, stdev=507.79 00:41:13.476 clat (usec): min=602, max=39965, avg=10645.02, stdev=5257.34 00:41:13.476 lat (usec): min=616, max=39973, avg=10723.18, stdev=5303.34 00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[ 3490], 5.00th=[ 5538], 10.00th=[ 7046], 20.00th=[ 8455], 00:41:13.476 | 30.00th=[ 9110], 40.00th=[ 9503], 50.00th=[ 9634], 60.00th=[10028], 00:41:13.476 | 70.00th=[10290], 80.00th=[10421], 90.00th=[15401], 95.00th=[17695], 00:41:13.476 | 99.00th=[38011], 99.50th=[39060], 99.90th=[39584], 99.95th=[39584], 00:41:13.476 | 99.99th=[40109] 00:41:13.476 bw ( KiB/s): min=23992, max=24048, per=33.34%, avg=24020.00, stdev=39.60, samples=2 00:41:13.476 iops : min= 5998, max= 6012, avg=6005.00, stdev= 9.90, samples=2 00:41:13.476 lat (usec) : 750=0.03% 00:41:13.476 lat (msec) : 2=0.03%, 4=0.54%, 10=53.00%, 20=43.78%, 50=2.47% 00:41:13.476 lat (msec) : 100=0.15% 00:41:13.476 cpu : usr=3.89%, sys=7.19%, ctx=434, majf=0, minf=2 00:41:13.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:41:13.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.476 issued rwts: total=5632,6132,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.476 job2: (groupid=0, jobs=1): err= 0: pid=610964: Sun Dec 15 05:41:26 2024 00:41:13.476 read: IOPS=2547, BW=9.95MiB/s (10.4MB/s)(10.0MiB/1005msec) 00:41:13.476 slat (nsec): min=1811, max=11539k, avg=213172.66, stdev=1154539.35 00:41:13.476 clat (usec): min=8516, max=40770, avg=25923.46, stdev=5200.80 00:41:13.476 lat (usec): min=8535, max=40796, avg=26136.63, stdev=5289.40 
00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[14091], 5.00th=[16057], 10.00th=[17957], 20.00th=[21627], 00:41:13.476 | 30.00th=[23725], 40.00th=[25035], 50.00th=[26870], 60.00th=[27657], 00:41:13.476 | 70.00th=[28967], 80.00th=[30540], 90.00th=[31589], 95.00th=[33162], 00:41:13.476 | 99.00th=[37487], 99.50th=[38536], 99.90th=[39060], 99.95th=[39060], 00:41:13.476 | 99.99th=[40633] 00:41:13.476 write: IOPS=2595, BW=10.1MiB/s (10.6MB/s)(10.2MiB/1005msec); 0 zone resets 00:41:13.476 slat (usec): min=2, max=11774, avg=167.50, stdev=893.99 00:41:13.476 clat (usec): min=2743, max=52218, avg=23063.10, stdev=6633.57 00:41:13.476 lat (usec): min=6610, max=52226, avg=23230.60, stdev=6703.17 00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[ 8160], 5.00th=[16188], 10.00th=[16581], 20.00th=[18482], 00:41:13.476 | 30.00th=[20055], 40.00th=[21103], 50.00th=[21890], 60.00th=[22938], 00:41:13.476 | 70.00th=[25297], 80.00th=[26084], 90.00th=[30016], 95.00th=[36439], 00:41:13.476 | 99.00th=[49546], 99.50th=[50594], 99.90th=[52167], 99.95th=[52167], 00:41:13.476 | 99.99th=[52167] 00:41:13.476 bw ( KiB/s): min= 8288, max=12192, per=14.21%, avg=10240.00, stdev=2760.54, samples=2 00:41:13.476 iops : min= 2072, max= 3048, avg=2560.00, stdev=690.14, samples=2 00:41:13.476 lat (msec) : 4=0.02%, 10=0.95%, 20=20.78%, 50=77.98%, 100=0.27% 00:41:13.476 cpu : usr=2.29%, sys=3.88%, ctx=267, majf=0, minf=1 00:41:13.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:41:13.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.476 issued rwts: total=2560,2608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.476 latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.476 job3: (groupid=0, jobs=1): err= 0: pid=610965: Sun Dec 15 05:41:26 2024 00:41:13.476 read: IOPS=5094, BW=19.9MiB/s (20.9MB/s)(20.0MiB/1005msec) 
00:41:13.476 slat (nsec): min=1652, max=10328k, avg=91418.81, stdev=721091.49 00:41:13.476 clat (usec): min=2824, max=23112, avg=11756.96, stdev=2977.36 00:41:13.476 lat (usec): min=2829, max=28101, avg=11848.38, stdev=3043.88 00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[ 5800], 5.00th=[ 7963], 10.00th=[ 9372], 20.00th=[10159], 00:41:13.476 | 30.00th=[10683], 40.00th=[10945], 50.00th=[11076], 60.00th=[11338], 00:41:13.476 | 70.00th=[11731], 80.00th=[13042], 90.00th=[15926], 95.00th=[18744], 00:41:13.476 | 99.00th=[21103], 99.50th=[22152], 99.90th=[22152], 99.95th=[23200], 00:41:13.476 | 99.99th=[23200] 00:41:13.476 write: IOPS=5507, BW=21.5MiB/s (22.6MB/s)(21.6MiB/1005msec); 0 zone resets 00:41:13.476 slat (usec): min=2, max=10278, avg=88.27, stdev=597.71 00:41:13.476 clat (usec): min=1191, max=30359, avg=12114.50, stdev=3693.59 00:41:13.476 lat (usec): min=1210, max=30362, avg=12202.77, stdev=3725.71 00:41:13.476 clat percentiles (usec): 00:41:13.476 | 1.00th=[ 6980], 5.00th=[ 7701], 10.00th=[ 8455], 20.00th=[ 9634], 00:41:13.476 | 30.00th=[10421], 40.00th=[11076], 50.00th=[11469], 60.00th=[11600], 00:41:13.476 | 70.00th=[12256], 80.00th=[14484], 90.00th=[16909], 95.00th=[18744], 00:41:13.476 | 99.00th=[26608], 99.50th=[28181], 99.90th=[30278], 99.95th=[30278], 00:41:13.476 | 99.99th=[30278] 00:41:13.476 bw ( KiB/s): min=20936, max=22328, per=30.03%, avg=21632.00, stdev=984.29, samples=2 00:41:13.476 iops : min= 5234, max= 5582, avg=5408.00, stdev=246.07, samples=2 00:41:13.476 lat (msec) : 2=0.03%, 4=0.22%, 10=21.21%, 20=75.33%, 50=3.22% 00:41:13.476 cpu : usr=3.09%, sys=7.77%, ctx=358, majf=0, minf=2 00:41:13.476 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:41:13.476 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:13.476 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:41:13.476 issued rwts: total=5120,5535,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:13.476 
latency : target=0, window=0, percentile=100.00%, depth=128 00:41:13.476 00:41:13.476 Run status group 0 (all jobs): 00:41:13.476 READ: bw=65.7MiB/s (68.9MB/s), 9.95MiB/s-21.9MiB/s (10.4MB/s-23.0MB/s), io=66.0MiB (69.2MB), run=1003-1005msec 00:41:13.476 WRITE: bw=70.4MiB/s (73.8MB/s), 10.1MiB/s-23.9MiB/s (10.6MB/s-25.0MB/s), io=70.7MiB (74.1MB), run=1003-1005msec 00:41:13.476 00:41:13.476 Disk stats (read/write): 00:41:13.476 nvme0n1: ios=3092/3088, merge=0/0, ticks=27609/25086, in_queue=52695, util=85.77% 00:41:13.476 nvme0n2: ios=4893/5120, merge=0/0, ticks=40258/38206, in_queue=78464, util=89.84% 00:41:13.476 nvme0n3: ios=2073/2372, merge=0/0, ticks=18974/16605, in_queue=35579, util=93.55% 00:41:13.476 nvme0n4: ios=4665/4705, merge=0/0, ticks=45815/42620, in_queue=88435, util=95.70% 00:41:13.476 05:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:41:13.476 05:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=611186 00:41:13.476 05:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:41:13.476 05:41:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:41:13.476 [global] 00:41:13.476 thread=1 00:41:13.476 invalidate=1 00:41:13.476 rw=read 00:41:13.476 time_based=1 00:41:13.476 runtime=10 00:41:13.476 ioengine=libaio 00:41:13.476 direct=1 00:41:13.476 bs=4096 00:41:13.476 iodepth=1 00:41:13.476 norandommap=1 00:41:13.476 numjobs=1 00:41:13.476 00:41:13.476 [job0] 00:41:13.476 filename=/dev/nvme0n1 00:41:13.476 [job1] 00:41:13.476 filename=/dev/nvme0n2 00:41:13.476 [job2] 00:41:13.476 filename=/dev/nvme0n3 00:41:13.476 [job3] 00:41:13.476 filename=/dev/nvme0n4 00:41:13.476 Could not set queue depth (nvme0n1) 00:41:13.476 Could not set queue depth (nvme0n2) 00:41:13.476 Could not set queue 
depth (nvme0n3) 00:41:13.476 Could not set queue depth (nvme0n4) 00:41:13.734 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.734 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.734 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.734 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:41:13.734 fio-3.35 00:41:13.734 Starting 4 threads 00:41:16.257 05:41:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:41:16.515 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=29724672, buflen=4096 00:41:16.515 fio: pid=611333, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:16.515 05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:41:16.773 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45334528, buflen=4096 00:41:16.773 fio: pid=611332, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:16.773 05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:16.773 05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:41:17.030 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=43995136, buflen=4096 00:41:17.030 fio: pid=611330, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:41:17.030 05:41:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.030 05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:41:17.288 fio: io_u error on file /dev/nvme0n2: Input/output error: read offset=348160, buflen=4096 00:41:17.288 fio: pid=611331, err=5/file:io_u.c:1889, func=io_u error, error=Input/output error 00:41:17.288 05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.288 05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:41:17.288 00:41:17.288 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611330: Sun Dec 15 05:41:30 2024 00:41:17.288 read: IOPS=3428, BW=13.4MiB/s (14.0MB/s)(42.0MiB/3133msec) 00:41:17.288 slat (usec): min=2, max=16788, avg= 8.36, stdev=161.93 00:41:17.288 clat (usec): min=175, max=41194, avg=280.29, stdev=1361.36 00:41:17.288 lat (usec): min=178, max=57983, avg=288.66, stdev=1417.17 00:41:17.288 clat percentiles (usec): 00:41:17.288 | 1.00th=[ 184], 5.00th=[ 194], 10.00th=[ 204], 20.00th=[ 215], 00:41:17.288 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 233], 00:41:17.288 | 70.00th=[ 239], 80.00th=[ 247], 90.00th=[ 269], 95.00th=[ 306], 00:41:17.288 | 99.00th=[ 375], 99.50th=[ 375], 99.90th=[40633], 99.95th=[41157], 00:41:17.288 | 99.99th=[41157] 00:41:17.288 bw ( KiB/s): min= 6981, max=17544, per=40.43%, avg=14038.17, stdev=3958.60, samples=6 00:41:17.288 iops : min= 1745, max= 4386, avg=3509.50, stdev=989.74, samples=6 00:41:17.288 lat (usec) : 250=82.82%, 500=16.98%, 750=0.04% 
00:41:17.288 lat (msec) : 2=0.01%, 4=0.03%, 50=0.11% 00:41:17.288 cpu : usr=0.89%, sys=2.84%, ctx=10743, majf=0, minf=2 00:41:17.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.288 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.288 issued rwts: total=10742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.288 job1: (groupid=0, jobs=1): err= 5 (file:io_u.c:1889, func=io_u error, error=Input/output error): pid=611331: Sun Dec 15 05:41:30 2024 00:41:17.288 read: IOPS=25, BW=101KiB/s (104kB/s)(340KiB/3358msec) 00:41:17.288 slat (usec): min=8, max=11746, avg=357.30, stdev=1846.50 00:41:17.288 clat (usec): min=257, max=42303, avg=39137.59, stdev=8679.21 00:41:17.288 lat (usec): min=278, max=52991, avg=39417.13, stdev=8923.28 00:41:17.288 clat percentiles (usec): 00:41:17.288 | 1.00th=[ 258], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:41:17.288 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:41:17.288 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:41:17.288 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:41:17.288 | 99.99th=[42206] 00:41:17.288 bw ( KiB/s): min= 96, max= 112, per=0.29%, avg=101.83, stdev= 7.96, samples=6 00:41:17.288 iops : min= 24, max= 28, avg=25.33, stdev= 2.07, samples=6 00:41:17.288 lat (usec) : 500=3.49%, 750=1.16% 00:41:17.288 lat (msec) : 50=94.19% 00:41:17.288 cpu : usr=0.00%, sys=0.24%, ctx=91, majf=0, minf=2 00:41:17.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.289 complete : 0=1.1%, 4=98.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.289 issued rwts: total=86,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:41:17.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.289 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611332: Sun Dec 15 05:41:30 2024 00:41:17.289 read: IOPS=3782, BW=14.8MiB/s (15.5MB/s)(43.2MiB/2926msec) 00:41:17.289 slat (usec): min=6, max=15075, avg=10.02, stdev=180.43 00:41:17.289 clat (usec): min=177, max=41044, avg=250.85, stdev=747.56 00:41:17.289 lat (usec): min=185, max=41070, avg=260.87, stdev=770.00 00:41:17.289 clat percentiles (usec): 00:41:17.289 | 1.00th=[ 198], 5.00th=[ 202], 10.00th=[ 208], 20.00th=[ 219], 00:41:17.289 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 233], 60.00th=[ 237], 00:41:17.289 | 70.00th=[ 243], 80.00th=[ 251], 90.00th=[ 269], 95.00th=[ 289], 00:41:17.289 | 99.00th=[ 318], 99.50th=[ 363], 99.90th=[ 429], 99.95th=[ 725], 00:41:17.289 | 99.99th=[40633] 00:41:17.289 bw ( KiB/s): min=11072, max=16824, per=43.79%, avg=15206.40, stdev=2341.52, samples=5 00:41:17.289 iops : min= 2768, max= 4206, avg=3801.60, stdev=585.38, samples=5 00:41:17.289 lat (usec) : 250=79.86%, 500=20.06%, 750=0.03% 00:41:17.289 lat (msec) : 4=0.01%, 50=0.04% 00:41:17.289 cpu : usr=2.09%, sys=4.31%, ctx=11072, majf=0, minf=1 00:41:17.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.289 issued rwts: total=11069,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.289 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=611333: Sun Dec 15 05:41:30 2024 00:41:17.289 read: IOPS=2666, BW=10.4MiB/s (10.9MB/s)(28.3MiB/2722msec) 00:41:17.289 slat (nsec): min=6228, max=39722, avg=8148.58, stdev=1478.07 00:41:17.289 clat (usec): min=206, max=41430, avg=361.23, 
stdev=2091.31 00:41:17.289 lat (usec): min=214, max=41443, avg=369.38, stdev=2092.01 00:41:17.289 clat percentiles (usec): 00:41:17.289 | 1.00th=[ 221], 5.00th=[ 227], 10.00th=[ 231], 20.00th=[ 237], 00:41:17.289 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 249], 00:41:17.289 | 70.00th=[ 251], 80.00th=[ 258], 90.00th=[ 273], 95.00th=[ 306], 00:41:17.289 | 99.00th=[ 383], 99.50th=[ 469], 99.90th=[41157], 99.95th=[41157], 00:41:17.289 | 99.99th=[41681] 00:41:17.289 bw ( KiB/s): min= 104, max=15768, per=30.57%, avg=10616.00, stdev=6775.92, samples=5 00:41:17.289 iops : min= 26, max= 3942, avg=2654.00, stdev=1693.98, samples=5 00:41:17.289 lat (usec) : 250=66.04%, 500=33.60%, 750=0.06% 00:41:17.289 lat (msec) : 2=0.01%, 20=0.01%, 50=0.26% 00:41:17.289 cpu : usr=2.09%, sys=3.53%, ctx=7258, majf=0, minf=2 00:41:17.289 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:41:17.289 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.289 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:41:17.289 issued rwts: total=7258,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:41:17.289 latency : target=0, window=0, percentile=100.00%, depth=1 00:41:17.289 00:41:17.289 Run status group 0 (all jobs): 00:41:17.289 READ: bw=33.9MiB/s (35.6MB/s), 101KiB/s-14.8MiB/s (104kB/s-15.5MB/s), io=114MiB (119MB), run=2722-3358msec 00:41:17.289 00:41:17.289 Disk stats (read/write): 00:41:17.289 nvme0n1: ios=10716/0, merge=0/0, ticks=3365/0, in_queue=3365, util=95.59% 00:41:17.289 nvme0n2: ios=122/0, merge=0/0, ticks=4276/0, in_queue=4276, util=99.17% 00:41:17.289 nvme0n3: ios=10849/0, merge=0/0, ticks=2633/0, in_queue=2633, util=95.64% 00:41:17.289 nvme0n4: ios=6976/0, merge=0/0, ticks=2455/0, in_queue=2455, util=96.48% 00:41:17.289 05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.289 
05:41:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:41:17.546 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.547 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:41:17.804 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:17.804 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:41:18.061 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:41:18.061 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 611186 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:18.318 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # 
waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:41:18.318 nvmf hotplug test: fio failed as expected 00:41:18.318 05:41:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:41:18.576 05:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@121 -- # sync 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@124 -- # set +e 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:18.576 rmmod nvme_tcp 00:41:18.576 rmmod nvme_fabrics 00:41:18.576 rmmod nvme_keyring 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@128 -- # set -e 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # return 0 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@517 -- # '[' -n 608610 ']' 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@518 -- # killprocess 608610 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 608610 ']' 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 608610 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 608610 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:18.576 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:18.577 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 608610' 00:41:18.577 killing process with pid 608610 00:41:18.577 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 608610 00:41:18.577 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 608610 00:41:18.835 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@297 -- # iptr 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-save 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@791 -- # iptables-restore 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:18.836 05:41:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:18.836 05:41:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:21.370 00:41:21.370 real 0m25.693s 00:41:21.370 user 1m30.777s 00:41:21.370 sys 0m11.096s 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:41:21.370 ************************************ 00:41:21.370 END TEST nvmf_fio_target 00:41:21.370 ************************************ 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:21.370 ************************************ 00:41:21.370 START TEST nvmf_bdevio 00:41:21.370 ************************************ 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:41:21.370 * Looking for test storage... 00:41:21.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lcov --version 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:41:21.370 05:41:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # 
(( ver1[v] < ver2[v] )) 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:21.370 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:21.370 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.370 --rc genhtml_branch_coverage=1 00:41:21.370 --rc genhtml_function_coverage=1 00:41:21.370 --rc genhtml_legend=1 00:41:21.370 --rc geninfo_all_blocks=1 00:41:21.370 --rc geninfo_unexecuted_blocks=1 00:41:21.371 00:41:21.371 ' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:21.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.371 --rc genhtml_branch_coverage=1 00:41:21.371 --rc genhtml_function_coverage=1 00:41:21.371 --rc genhtml_legend=1 00:41:21.371 --rc geninfo_all_blocks=1 00:41:21.371 --rc geninfo_unexecuted_blocks=1 00:41:21.371 00:41:21.371 ' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:21.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.371 --rc genhtml_branch_coverage=1 00:41:21.371 --rc genhtml_function_coverage=1 00:41:21.371 --rc genhtml_legend=1 00:41:21.371 --rc geninfo_all_blocks=1 00:41:21.371 --rc geninfo_unexecuted_blocks=1 00:41:21.371 00:41:21.371 ' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:21.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:21.371 --rc genhtml_branch_coverage=1 00:41:21.371 --rc genhtml_function_coverage=1 00:41:21.371 --rc genhtml_legend=1 00:41:21.371 --rc 
geninfo_all_blocks=1 00:41:21.371 --rc geninfo_unexecuted_blocks=1 00:41:21.371 00:41:21.371 ' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:21.371 05:41:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # : 0 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:21.371 05:41:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # xtrace_disable 00:41:21.371 05:41:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 
pci net_dev 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # pci_devs=() 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # net_devs=() 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # e810=() 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@320 -- # local -ga e810 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # x722=() 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # local -ga x722 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # mlx=() 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@322 -- # local -ga mlx 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:27.942 05:41:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:27.942 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:27.942 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:27.942 05:41:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:27.942 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:27.943 Found net devices under 0000:af:00.0: cvl_0_0 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:27.943 Found net devices under 0000:af:00.1: cvl_0_1 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@442 -- # is_hw=yes 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:27.943 05:41:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@281 -- # ip link 
set cvl_0_1 up 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:27.943 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:27.943 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.324 ms 00:41:27.943 00:41:27.943 --- 10.0.0.2 ping statistics --- 00:41:27.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.943 rtt min/avg/max/mdev = 0.324/0.324/0.324/0.000 ms 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:27.943 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:41:27.943 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.186 ms 00:41:27.943 00:41:27.943 --- 10.0.0.1 ping statistics --- 00:41:27.943 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:27.943 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@450 -- # return 0 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@509 -- # nvmfpid=615491 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@510 -- # waitforlisten 615491 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 615491 ']' 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:27.943 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.943 [2024-12-15 05:41:40.697317] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:27.943 [2024-12-15 05:41:40.698225] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
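The target-side plumbing traced above — a dedicated network namespace for the target interface, point-to-point 10.0.0.x addressing, an iptables accept rule for port 4420, and a ping check — can be condensed into a short sketch. Interface names, addresses, and the iptables comment are taken from the log; the `run` wrapper (an addition for this sketch) only echoes each command unless `DO_IT=1` is set, since the real commands need root and the `cvl_0_*` interfaces.

```shell
# Condensed sketch of the namespace setup traced in the log above.
# run() echoes by default because the real commands require root.
run() { if [ "${DO_IT:-0}" = 1 ]; then "$@"; else echo "+ $*"; fi; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"                           # namespace for the target side
run ip link set cvl_0_0 netns "$NS"              # move the target interface into it
run ip addr add 10.0.0.1/24 dev cvl_0_1          # initiator-side address
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT'
run ping -c 1 10.0.0.2                           # verify the path before starting tests
```

With the interfaces plumbed this way, the target is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt ...`), exactly as the trace shows.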
00:41:27.943 [2024-12-15 05:41:40.698259] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:27.943 [2024-12-15 05:41:40.775944] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:41:27.943 [2024-12-15 05:41:40.799156] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:27.943 [2024-12-15 05:41:40.799193] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:27.943 [2024-12-15 05:41:40.799200] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:27.943 [2024-12-15 05:41:40.799207] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:27.943 [2024-12-15 05:41:40.799213] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:27.943 [2024-12-15 05:41:40.800707] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:41:27.943 [2024-12-15 05:41:40.800814] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 5 00:41:27.943 [2024-12-15 05:41:40.800923] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:41:27.943 [2024-12-15 05:41:40.800924] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 6 00:41:27.943 [2024-12-15 05:41:40.863469] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:27.943 [2024-12-15 05:41:40.864590] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:41:27.943 [2024-12-15 05:41:40.864642] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:41:27.943 [2024-12-15 05:41:40.865138] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:27.943 [2024-12-15 05:41:40.865163] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:27.943 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.944 [2024-12-15 05:41:40.933684] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.944 Malloc0 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.944 05:41:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:27.944 [2024-12-15 05:41:41.025933] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
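The target configuration driven through `rpc_cmd` in bdevio.sh above reduces to five RPCs. Collected in order as they would be issued via SPDK's `scripts/rpc.py` (a sketch; the log issues them through the test harness's `rpc_cmd` wrapper against the target started with `-r`/default socket, and the echo below stands in for a real invocation):

```shell
# The RPC sequence traced above: transport, backing bdev, subsystem,
# namespace, listener. All arguments are copied verbatim from the log.
rpc_calls=(
  "nvmf_create_transport -t tcp -o -u 8192"
  "bdev_malloc_create 64 512 -b Malloc0"
  "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"
  "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"
  "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"
)
for call in "${rpc_calls[@]}"; do
  echo "rpc.py $call"   # replace echo with the real rpc.py when a target is up
done
```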
00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # config=() 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@560 -- # local subsystem config 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:41:27.944 { 00:41:27.944 "params": { 00:41:27.944 "name": "Nvme$subsystem", 00:41:27.944 "trtype": "$TEST_TRANSPORT", 00:41:27.944 "traddr": "$NVMF_FIRST_TARGET_IP", 00:41:27.944 "adrfam": "ipv4", 00:41:27.944 "trsvcid": "$NVMF_PORT", 00:41:27.944 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:41:27.944 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:41:27.944 "hdgst": ${hdgst:-false}, 00:41:27.944 "ddgst": ${ddgst:-false} 00:41:27.944 }, 00:41:27.944 "method": "bdev_nvme_attach_controller" 00:41:27.944 } 00:41:27.944 EOF 00:41:27.944 )") 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@582 -- # cat 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@584 -- # jq . 
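The `gen_nvmf_target_json` heredoc just traced can be exercised standalone. A minimal sketch, with the environment values (`tcp`, `10.0.0.2`, `4420`) hard-coded from the log rather than taken from the harness; the real helper loops over `"${@:-1}"` and pipes the joined fragments through `jq .`:

```shell
# Sketch of the heredoc-template pattern above: expand one JSON fragment
# per subsystem, then join the fragments with commas.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1; do
  config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)")
done
IFS=,
printf '%s\n' "${config[*]}"   # the rendered config, as seen in the trace below
```

Because `hdgst`/`ddgst` are unset here, the `${var:-false}` defaults kick in, which is why the rendered JSON in the trace shows `"hdgst": false, "ddgst": false`.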
00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@585 -- # IFS=, 00:41:27.944 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:41:27.944 "params": { 00:41:27.944 "name": "Nvme1", 00:41:27.944 "trtype": "tcp", 00:41:27.944 "traddr": "10.0.0.2", 00:41:27.944 "adrfam": "ipv4", 00:41:27.944 "trsvcid": "4420", 00:41:27.944 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:41:27.944 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:41:27.944 "hdgst": false, 00:41:27.944 "ddgst": false 00:41:27.944 }, 00:41:27.944 "method": "bdev_nvme_attach_controller" 00:41:27.944 }' 00:41:27.944 [2024-12-15 05:41:41.076830] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:27.944 [2024-12-15 05:41:41.076874] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid615589 ] 00:41:27.944 [2024-12-15 05:41:41.152707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:41:27.944 [2024-12-15 05:41:41.177553] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:27.944 [2024-12-15 05:41:41.177662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:27.944 [2024-12-15 05:41:41.177662] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:41:27.944 I/O targets: 00:41:27.944 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:41:27.944 00:41:27.944 00:41:27.944 CUnit - A unit testing framework for C - Version 2.1-3 00:41:27.944 http://cunit.sourceforge.net/ 00:41:27.944 00:41:27.944 00:41:27.944 Suite: bdevio tests on: Nvme1n1 00:41:27.944 Test: blockdev write read block ...passed 00:41:27.944 Test: blockdev write zeroes read block ...passed 00:41:27.944 Test: blockdev write zeroes read no split ...passed 00:41:27.944 Test: blockdev 
write zeroes read split ...passed 00:41:27.944 Test: blockdev write zeroes read split partial ...passed 00:41:27.944 Test: blockdev reset ...[2024-12-15 05:41:41.553083] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:41:27.944 [2024-12-15 05:41:41.553143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23b2630 (9): Bad file descriptor 00:41:27.944 [2024-12-15 05:41:41.556615] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:41:27.944 passed 00:41:27.944 Test: blockdev write read 8 blocks ...passed 00:41:27.944 Test: blockdev write read size > 128k ...passed 00:41:27.944 Test: blockdev write read invalid size ...passed 00:41:27.944 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:41:27.944 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:41:27.944 Test: blockdev write read max offset ...passed 00:41:28.202 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:41:28.202 Test: blockdev writev readv 8 blocks ...passed 00:41:28.202 Test: blockdev writev readv 30 x 1block ...passed 00:41:28.202 Test: blockdev writev readv block ...passed 00:41:28.202 Test: blockdev writev readv size > 128k ...passed 00:41:28.202 Test: blockdev writev readv size > 128k in two iovs ...passed 00:41:28.202 Test: blockdev comparev and writev ...[2024-12-15 05:41:41.728271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.202 [2024-12-15 05:41:41.728305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:41:28.202 [2024-12-15 05:41:41.728319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.202 
[2024-12-15 05:41:41.728327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:41:28.202 [2024-12-15 05:41:41.728618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.202 [2024-12-15 05:41:41.728629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:41:28.202 [2024-12-15 05:41:41.728641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.202 [2024-12-15 05:41:41.728648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:41:28.202 [2024-12-15 05:41:41.728930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.202 [2024-12-15 05:41:41.728942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:41:28.202 [2024-12-15 05:41:41.728953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.202 [2024-12-15 05:41:41.728961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:41:28.202 [2024-12-15 05:41:41.729251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.202 [2024-12-15 05:41:41.729267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:41:28.203 [2024-12-15 05:41:41.729279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:41:28.203 [2024-12-15 05:41:41.729287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:41:28.203 passed 00:41:28.203 Test: blockdev nvme passthru rw ...passed 00:41:28.203 Test: blockdev nvme passthru vendor specific ...[2024-12-15 05:41:41.811468] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.203 [2024-12-15 05:41:41.811486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:41:28.203 [2024-12-15 05:41:41.811593] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.203 [2024-12-15 05:41:41.811603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:41:28.203 [2024-12-15 05:41:41.811722] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.203 [2024-12-15 05:41:41.811732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:41:28.203 [2024-12-15 05:41:41.811848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:41:28.203 [2024-12-15 05:41:41.811858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:41:28.203 passed 00:41:28.203 Test: blockdev nvme admin passthru ...passed 00:41:28.203 Test: blockdev copy ...passed 00:41:28.203 00:41:28.203 Run Summary: Type Total Ran Passed Failed Inactive 00:41:28.203 suites 1 1 n/a 0 0 00:41:28.203 tests 23 23 23 0 0 00:41:28.203 asserts 152 152 152 0 n/a 00:41:28.203 00:41:28.203 Elapsed time = 0.929 
seconds 00:41:28.462 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:41:28.462 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:28.462 05:41:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@121 -- # sync 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@124 -- # set +e 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:28.462 rmmod nvme_tcp 00:41:28.462 rmmod nvme_fabrics 00:41:28.462 rmmod nvme_keyring 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@128 -- # set -e 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # return 0 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@517 -- # '[' -n 615491 ']' 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@518 -- # killprocess 615491 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 615491 ']' 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 615491 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 615491 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 615491' 00:41:28.462 killing process with pid 615491 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 615491 00:41:28.462 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 615491 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@297 
-- # iptr 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-save 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@791 -- # iptables-restore 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:41:28.721 05:41:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:31.258 05:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:31.258 00:41:31.258 real 0m9.845s 00:41:31.258 user 0m8.179s 00:41:31.258 sys 0m5.240s 00:41:31.258 05:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:31.258 05:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:41:31.258 ************************************ 00:41:31.258 END TEST nvmf_bdevio 00:41:31.258 ************************************ 00:41:31.258 05:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:41:31.258 00:41:31.258 real 4m30.406s 00:41:31.258 user 9m2.440s 00:41:31.258 sys 1m50.664s 00:41:31.258 05:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:41:31.258 05:41:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:41:31.258 ************************************ 00:41:31.258 END TEST nvmf_target_core_interrupt_mode 00:41:31.258 ************************************ 00:41:31.258 05:41:44 nvmf_tcp -- nvmf/nvmf.sh@21 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:31.258 05:41:44 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:41:31.258 05:41:44 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:31.258 05:41:44 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:31.258 ************************************ 00:41:31.258 START TEST nvmf_interrupt 00:41:31.258 ************************************ 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:41:31.258 * Looking for test storage... 
00:41:31.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lcov --version 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:31.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.258 --rc genhtml_branch_coverage=1 00:41:31.258 --rc genhtml_function_coverage=1 00:41:31.258 --rc genhtml_legend=1 00:41:31.258 --rc geninfo_all_blocks=1 00:41:31.258 --rc geninfo_unexecuted_blocks=1 00:41:31.258 00:41:31.258 ' 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:31.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.258 --rc genhtml_branch_coverage=1 00:41:31.258 --rc 
genhtml_function_coverage=1 00:41:31.258 --rc genhtml_legend=1 00:41:31.258 --rc geninfo_all_blocks=1 00:41:31.258 --rc geninfo_unexecuted_blocks=1 00:41:31.258 00:41:31.258 ' 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:31.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.258 --rc genhtml_branch_coverage=1 00:41:31.258 --rc genhtml_function_coverage=1 00:41:31.258 --rc genhtml_legend=1 00:41:31.258 --rc geninfo_all_blocks=1 00:41:31.258 --rc geninfo_unexecuted_blocks=1 00:41:31.258 00:41:31.258 ' 00:41:31.258 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:31.259 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:31.259 --rc genhtml_branch_coverage=1 00:41:31.259 --rc genhtml_function_coverage=1 00:41:31.259 --rc genhtml_legend=1 00:41:31.259 --rc geninfo_all_blocks=1 00:41:31.259 --rc geninfo_unexecuted_blocks=1 00:41:31.259 00:41:31.259 ' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:31.259 
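The `cmp_versions` trace above (from scripts/common.sh, deciding whether `lcov 1.15` predates `2`) splits each version string on `.`, `-`, or `:` and compares component-by-component, padding the shorter version with zeros. A self-contained sketch of that logic, reduced to a strict less-than test (the original also handles `>`, `=`, and related operators):

```shell
# Sketch of the cmp_versions logic traced above, as a strict less-than.
# Components are split on the IFS set ".-:" and compared numerically.
version_lt() {
    local IFS=.-: v a b
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local len=${#ver1[@]}
    (( ${#ver2[@]} > len )) && len=${#ver2[@]}
    for (( v = 0; v < len; v++ )); do
        a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing components count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1   # equal versions are not strictly less-than
}

version_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

This is why the trace takes the `lt 1.15 2` branch and selects the pre-2.x `LCOV_OPTS`.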
05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.259 
05:41:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # : 0 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:31.259 05:41:44 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@33 -- # '[' 1 -eq 1 ']' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@34 -- # NVMF_APP+=(--interrupt-mode) 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@476 -- # prepare_net_devs 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@438 -- # local -g is_hw=no 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@440 -- # remove_spdk_ns 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:41:31.259 
05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # xtrace_disable 00:41:31.259 05:41:44 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # pci_devs=() 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # local -a pci_devs 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # pci_net_devs=() 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # pci_drivers=() 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@317 -- # local -A pci_drivers 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # net_devs=() 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@319 -- # local -ga net_devs 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # e810=() 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@320 -- # local -ga e810 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # x722=() 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # local -ga x722 00:41:36.716 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # mlx=() 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@322 -- # local -ga mlx 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:41:36.717 05:41:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:41:36.717 Found 0000:af:00.0 (0x8086 - 0x159b) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:41:36.717 Found 0000:af:00.1 (0x8086 - 0x159b) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:36.717 05:41:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:41:36.717 Found net devices under 0000:af:00.0: cvl_0_0 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@418 -- # [[ up == up ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:41:36.717 Found net devices under 0000:af:00.1: cvl_0_1 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@442 -- # is_hw=yes 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@251 -- # 
NVMF_FIRST_TARGET_IP=10.0.0.2 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:41:36.717 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:41:36.976 05:41:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:41:36.976 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:41:36.976 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.402 ms 00:41:36.976 00:41:36.976 --- 10.0.0.2 ping statistics --- 00:41:36.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:36.976 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:41:36.976 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:41:36.976 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.219 ms 00:41:36.976 00:41:36.976 --- 10.0.0.1 ping statistics --- 00:41:36.976 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:41:36.976 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@450 -- # return 0 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:41:36.976 05:41:50 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@509 -- # nvmfpid=619220 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@510 -- # waitforlisten 619220 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 619220 ']' 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:36.976 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:36.976 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:36.976 [2024-12-15 05:41:50.646806] thread.c:3079:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:41:36.976 [2024-12-15 05:41:50.647689] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:41:36.976 [2024-12-15 05:41:50.647721] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:41:37.235 [2024-12-15 05:41:50.724672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:37.235 [2024-12-15 05:41:50.747486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:41:37.235 [2024-12-15 05:41:50.747521] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:41:37.235 [2024-12-15 05:41:50.747528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:41:37.235 [2024-12-15 05:41:50.747534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:41:37.235 [2024-12-15 05:41:50.747539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:41:37.235 [2024-12-15 05:41:50.748645] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:37.235 [2024-12-15 05:41:50.748646] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:37.235 [2024-12-15 05:41:50.811238] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:41:37.235 [2024-12-15 05:41:50.811718] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:41:37.235 [2024-12-15 05:41:50.811973] thread.c:2144:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:41:37.235 5000+0 records in 00:41:37.235 5000+0 records out 00:41:37.235 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0180602 s, 567 MB/s 00:41:37.235 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:41:37.236 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.236 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:37.494 AIO0 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.494 05:41:50 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:37.494 [2024-12-15 05:41:50.945478] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:37.494 [2024-12-15 05:41:50.985726] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 619220 0 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 
-- # reactor_is_busy_or_idle 619220 0 idle 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:37.494 05:41:50 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:37.494 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619220 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0' 00:41:37.494 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619220 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.22 reactor_0 00:41:37.494 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:37.494 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt 
-- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 619220 1 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619220 1 idle 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619224 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.00 reactor_1' 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619224 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.00 
reactor_1 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=619377 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 619220 0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 619220 0 busy 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 
00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:37.753 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:38.011 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619220 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.23 reactor_0' 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619220 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:00.23 reactor_0 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:38.012 05:41:51 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:41:38.945 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:41:38.945 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:38.945 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 
619220 -w 256 00:41:38.945 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:39.203 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619220 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.53 reactor_0' 00:41:39.203 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619220 root 20 0 128.2g 46848 33792 R 99.9 0.1 0:02.53 reactor_0 00:41:39.203 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:39.203 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 619220 1 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 619220 1 busy 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:41:39.204 05:41:52 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:39.204 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619224 root 20 0 128.2g 46848 33792 R 87.5 0.1 0:01.33 reactor_1' 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619224 root 20 0 128.2g 46848 33792 R 87.5 0.1 0:01.33 reactor_1 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=87.5 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=87 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:39.462 05:41:52 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 619377 00:41:49.431 Initializing NVMe Controllers 00:41:49.431 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:41:49.431 Controller IO queue size 256, less than 
required. 00:41:49.431 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:41:49.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:41:49.431 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:41:49.432 Initialization complete. Launching workers. 00:41:49.432 ======================================================== 00:41:49.432 Latency(us) 00:41:49.432 Device Information : IOPS MiB/s Average min max 00:41:49.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16877.29 65.93 15175.94 2960.29 31285.87 00:41:49.432 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 17048.79 66.60 15021.46 7401.53 56594.54 00:41:49.432 ======================================================== 00:41:49.432 Total : 33926.08 132.52 15098.31 2960.29 56594.54 00:41:49.432 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 619220 0 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619220 0 idle 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@20 -- # hash top 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619220 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.22 reactor_0' 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619220 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:20.22 reactor_0 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 619220 1 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619220 1 idle 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:49.432 05:42:01 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:49.432 05:42:01 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619224 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1' 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619224 root 20 0 128.2g 46848 33792 S 0.0 0.1 0:10.00 reactor_1 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 
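After the idle checks, the trace connects an initiator with `nvme connect` and waits for the namespace to surface as a block device; the `waitforserial` helper it calls is essentially a bounded `lsblk` poll. A hedged sketch under that reading (the `retries` and `interval` parameters are added here for illustration; the traced helper hardcodes roughly 15 retries with a 2-second sleep, and the serial `SPDKISFASTANDAWESOME` is the one configured for the test namespace):

```shell
#!/usr/bin/env bash
# Sketch of the waitforserial pattern from the autotest helpers:
# poll lsblk until a block device carrying the expected serial appears,
# or give up after a bounded number of retries.
waitforserial() {
    local serial=$1 expected=${2:-1} retries=${3:-15} interval=${4:-2}
    local i=0 nvme_devices=0
    while (( i++ <= retries )); do
        # Count rows whose SERIAL column matches; the header line never
        # matches a real serial string, so a plain grep -c suffices.
        nvme_devices=$(lsblk -l -o NAME,SERIAL 2>/dev/null | grep -c "$serial")
        (( nvme_devices == expected )) && return 0
        sleep "$interval"
    done
    return 1   # device never showed up within the retry budget
}

# e.g. waitforserial SPDKISFASTANDAWESOME || echo "namespace did not appear"
```

The disconnect path later in the log inverts the same idea: after `nvme disconnect` it polls `lsblk` until the serial is gone, so both directions of the connect/disconnect test share one polling primitive.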
00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:41:49.432 05:42:02 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 619220 0 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619220 0 idle 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=idle 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:50.810 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619220 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.46 reactor_0' 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619220 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:20.46 reactor_0 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:51.072 05:42:04 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 619220 1 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 619220 1 idle 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=619220 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 619220 -w 256 00:41:51.072 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor=' 619224 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1' 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 619224 root 20 0 128.2g 72960 33792 S 0.0 0.1 0:10.09 reactor_1 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # 
cpu_rate=0.0 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:41:51.332 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@516 -- # nvmfcleanup 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@121 -- # sync 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@124 -- # set +e 00:41:51.332 
05:42:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # for i in {1..20} 00:41:51.332 05:42:04 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:41:51.332 rmmod nvme_tcp 00:41:51.332 rmmod nvme_fabrics 00:41:51.332 rmmod nvme_keyring 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@128 -- # set -e 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # return 0 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@517 -- # '[' -n 619220 ']' 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@518 -- # killprocess 619220 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 619220 ']' 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 619220 00:41:51.332 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:41:51.590 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:41:51.590 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 619220 00:41:51.590 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:41:51.590 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:41:51.590 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 619220' 00:41:51.590 killing process with pid 619220 00:41:51.590 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 619220 00:41:51.590 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 619220 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:41:51.849 
05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@297 -- # iptr 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-save 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@791 -- # iptables-restore 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@302 -- # remove_spdk_ns 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 14> /dev/null' 00:41:51.849 05:42:05 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:41:53.751 05:42:07 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:41:53.751 00:41:53.751 real 0m22.859s 00:41:53.751 user 0m39.780s 00:41:53.751 sys 0m8.491s 00:41:53.751 05:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:53.751 05:42:07 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:41:53.751 ************************************ 00:41:53.751 END TEST nvmf_interrupt 00:41:53.751 ************************************ 00:41:53.751 00:41:53.751 real 35m25.631s 00:41:53.751 user 86m8.493s 00:41:53.751 sys 10m15.719s 00:41:53.751 05:42:07 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:41:53.751 05:42:07 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:53.751 ************************************ 00:41:53.752 END TEST nvmf_tcp 00:41:53.752 ************************************ 00:41:53.752 05:42:07 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:41:53.752 05:42:07 -- spdk/autotest.sh@286 -- # 
run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:53.752 05:42:07 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:41:53.752 05:42:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:41:53.752 05:42:07 -- common/autotest_common.sh@10 -- # set +x 00:41:54.010 ************************************ 00:41:54.010 START TEST spdkcli_nvmf_tcp 00:41:54.010 ************************************ 00:41:54.010 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:41:54.010 * Looking for test storage... 00:41:54.010 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:41:54.010 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:41:54.010 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:41:54.011 05:42:07 
spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:41:54.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.011 --rc genhtml_branch_coverage=1 00:41:54.011 --rc genhtml_function_coverage=1 00:41:54.011 --rc genhtml_legend=1 00:41:54.011 --rc geninfo_all_blocks=1 00:41:54.011 --rc 
geninfo_unexecuted_blocks=1 00:41:54.011 00:41:54.011 ' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:41:54.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.011 --rc genhtml_branch_coverage=1 00:41:54.011 --rc genhtml_function_coverage=1 00:41:54.011 --rc genhtml_legend=1 00:41:54.011 --rc geninfo_all_blocks=1 00:41:54.011 --rc geninfo_unexecuted_blocks=1 00:41:54.011 00:41:54.011 ' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:41:54.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.011 --rc genhtml_branch_coverage=1 00:41:54.011 --rc genhtml_function_coverage=1 00:41:54.011 --rc genhtml_legend=1 00:41:54.011 --rc geninfo_all_blocks=1 00:41:54.011 --rc geninfo_unexecuted_blocks=1 00:41:54.011 00:41:54.011 ' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:41:54.011 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:41:54.011 --rc genhtml_branch_coverage=1 00:41:54.011 --rc genhtml_function_coverage=1 00:41:54.011 --rc genhtml_legend=1 00:41:54.011 --rc geninfo_all_blocks=1 00:41:54.011 --rc geninfo_unexecuted_blocks=1 00:41:54.011 00:41:54.011 ' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- 
scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # : 0 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:41:54.011 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- nvmf/common.sh@55 -- # have_pci_nics=0 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=622605 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 622605 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 622605 ']' 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:41:54.011 05:42:07 
spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:41:54.011 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:41:54.011 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.270 [2024-12-15 05:42:07.733964] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:41:54.270 [2024-12-15 05:42:07.734016] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid622605 ] 00:41:54.270 [2024-12-15 05:42:07.807313] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:41:54.270 [2024-12-15 05:42:07.831721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:41:54.270 [2024-12-15 05:42:07.831722] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:41:54.270 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:41:54.270 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:41:54.270 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:41:54.270 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:41:54.270 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.270 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:41:54.527 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # 
[[ tcp == \r\d\m\a ]] 00:41:54.527 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:41:54.527 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:41:54.527 05:42:07 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:41:54.527 05:42:07 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:41:54.528 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:41:54.528 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:41:54.528 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:41:54.528 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:41:54.528 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:41:54.528 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:41:54.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:54.528 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:54.528 '\''/nvmf/subsystem create 
nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:41:54.528 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:41:54.528 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:41:54.528 ' 00:41:57.051 [2024-12-15 05:42:10.694566] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:41:58.423 [2024-12-15 05:42:12.030937] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4260 *** 00:42:00.946 [2024-12-15 05:42:14.518534] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:42:03.472 [2024-12-15 05:42:16.685284] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:42:04.844 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:42:04.844 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:42:04.844 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:42:04.844 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:42:04.844 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:42:04.844 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:42:04.844 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:42:04.844 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:04.844 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:04.844 Executing command: 
['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:42:04.844 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:42:04.844 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # 
timing_exit spdkcli_create_nvmf_config 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:42:04.844 05:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:05.410 05:42:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:42:05.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' 
'\''Malloc4'\'' 00:42:05.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:05.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:42:05.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:42:05.410 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:42:05.410 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:42:05.410 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:42:05.410 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:42:05.410 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:42:05.410 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:42:05.410 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:42:05.410 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:42:05.410 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:42:05.410 ' 00:42:11.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:42:11.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:42:11.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:11.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:42:11.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:42:11.964 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:42:11.964 Executing command: ['/nvmf/subsystem 
delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:42:11.964 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:42:11.964 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:42:11.964 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:42:11.964 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:42:11.964 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:42:11.964 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:42:11.964 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 622605 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 622605 ']' 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 622605 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 622605 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:11.964 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 622605' 00:42:11.965 killing process with pid 622605 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 622605 00:42:11.965 05:42:24 
spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 622605 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 622605 ']' 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 622605 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 622605 ']' 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 622605 00:42:11.965 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (622605) - No such process 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 622605 is not found' 00:42:11.965 Process with pid 622605 is not found 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:42:11.965 00:42:11.965 real 0m17.366s 00:42:11.965 user 0m38.238s 00:42:11.965 sys 0m0.888s 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:11.965 05:42:24 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:42:11.965 ************************************ 00:42:11.965 END TEST spdkcli_nvmf_tcp 00:42:11.965 ************************************ 00:42:11.965 05:42:24 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:11.965 05:42:24 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 
']' 00:42:11.965 05:42:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:11.965 05:42:24 -- common/autotest_common.sh@10 -- # set +x 00:42:11.965 ************************************ 00:42:11.965 START TEST nvmf_identify_passthru 00:42:11.965 ************************************ 00:42:11.965 05:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:42:11.965 * Looking for test storage... 00:42:11.965 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:11.965 05:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:11.965 05:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lcov --version 00:42:11.965 05:42:24 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:11.965 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 
eq=0 v 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:42:11.965 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:11.965 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:11.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.965 --rc genhtml_branch_coverage=1 00:42:11.965 --rc genhtml_function_coverage=1 00:42:11.965 --rc genhtml_legend=1 00:42:11.965 
--rc geninfo_all_blocks=1 00:42:11.965 --rc geninfo_unexecuted_blocks=1 00:42:11.965 00:42:11.965 ' 00:42:11.965 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:11.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.965 --rc genhtml_branch_coverage=1 00:42:11.965 --rc genhtml_function_coverage=1 00:42:11.965 --rc genhtml_legend=1 00:42:11.965 --rc geninfo_all_blocks=1 00:42:11.965 --rc geninfo_unexecuted_blocks=1 00:42:11.965 00:42:11.965 ' 00:42:11.965 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:42:11.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.965 --rc genhtml_branch_coverage=1 00:42:11.965 --rc genhtml_function_coverage=1 00:42:11.965 --rc genhtml_legend=1 00:42:11.965 --rc geninfo_all_blocks=1 00:42:11.965 --rc geninfo_unexecuted_blocks=1 00:42:11.965 00:42:11.965 ' 00:42:11.965 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:11.965 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:11.965 --rc genhtml_branch_coverage=1 00:42:11.965 --rc genhtml_function_coverage=1 00:42:11.965 --rc genhtml_legend=1 00:42:11.965 --rc geninfo_all_blocks=1 00:42:11.965 --rc geninfo_unexecuted_blocks=1 00:42:11.965 00:42:11.965 ' 00:42:11.965 05:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@12 -- # 
NVMF_IP_PREFIX=192.168.100 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:11.965 05:42:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:11.965 05:42:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.965 05:42:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.965 05:42:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.965 05:42:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:11.965 05:42:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@51 -- # : 0 00:42:11.965 05:42:25 
nvmf_identify_passthru -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:11.965 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:11.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:11.966 05:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:11.966 05:42:25 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:42:11.966 05:42:25 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:11.966 05:42:25 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:11.966 05:42:25 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:11.966 05:42:25 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.966 05:42:25 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.966 05:42:25 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.966 05:42:25 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:42:11.966 05:42:25 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:11.966 05:42:25 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:11.966 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:11.966 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:11.966 05:42:25 nvmf_identify_passthru -- nvmf/common.sh@309 -- # xtrace_disable 00:42:11.966 05:42:25 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@315 -- # pci_devs=() 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@315 
-- # local -a pci_devs 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@319 -- # net_devs=() 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # e810=() 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@320 -- # local -ga e810 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # x722=() 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@321 -- # local -ga x722 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@322 -- # mlx=() 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@322 -- # local -ga mlx 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:17.239 
05:42:30 nvmf_identify_passthru -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:17.239 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:17.239 Found 0000:af:00.1 
(0x8086 - 0x159b) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:17.239 Found net devices under 0000:af:00.0: cvl_0_0 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:17.239 05:42:30 
nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:17.239 Found net devices under 0000:af:00.1: cvl_0_1 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@442 -- # is_hw=yes 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:17.239 
05:42:30 nvmf_identify_passthru -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:17.239 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:17.498 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:17.498 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.385 ms 00:42:17.498 00:42:17.498 --- 10.0.0.2 ping statistics --- 00:42:17.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.498 rtt min/avg/max/mdev = 0.385/0.385/0.385/0.000 ms 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:17.498 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:17.498 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.212 ms 00:42:17.498 00:42:17.498 --- 10.0.0.1 ping statistics --- 00:42:17.498 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:17.498 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@450 -- # return 0 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@478 -- # '[' '' == iso ']' 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:17.498 05:42:30 nvmf_identify_passthru -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:17.498 05:42:31 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:17.498 05:42:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:42:17.498 
05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:42:17.498 05:42:31 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:42:17.498 05:42:31 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:42:17.498 05:42:31 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:42:17.498 05:42:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:17.498 05:42:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:42:17.498 05:42:31 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:42:21.687 05:42:35 nvmf_identify_passthru -- 
target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ7244049A1P0FGN 00:42:21.687 05:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:42:21.687 05:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:42:21.687 05:42:35 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:42:25.875 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:42:25.875 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.875 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.875 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=629711 00:42:25.875 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:42:25.875 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:42:25.875 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 629711 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 629711 ']' 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:42:25.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:25.875 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:25.875 [2024-12-15 05:42:39.488407] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:25.875 [2024-12-15 05:42:39.488453] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:26.133 [2024-12-15 05:42:39.564767] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:42:26.133 [2024-12-15 05:42:39.588536] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:26.133 [2024-12-15 05:42:39.588573] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:26.133 [2024-12-15 05:42:39.588580] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:26.133 [2024-12-15 05:42:39.588586] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:26.133 [2024-12-15 05:42:39.588592] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:26.133 [2024-12-15 05:42:39.589863] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:42:26.133 [2024-12-15 05:42:39.589976] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:42:26.133 [2024-12-15 05:42:39.590082] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:26.133 [2024-12-15 05:42:39.590083] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:42:26.133 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.133 INFO: Log level set to 20 00:42:26.133 INFO: Requests: 00:42:26.133 { 00:42:26.133 "jsonrpc": "2.0", 00:42:26.133 "method": "nvmf_set_config", 00:42:26.133 "id": 1, 00:42:26.133 "params": { 00:42:26.133 "admin_cmd_passthru": { 00:42:26.133 "identify_ctrlr": true 00:42:26.133 } 00:42:26.133 } 00:42:26.133 } 00:42:26.133 00:42:26.133 INFO: response: 00:42:26.133 { 00:42:26.133 "jsonrpc": "2.0", 00:42:26.133 "id": 1, 00:42:26.133 "result": true 00:42:26.133 } 00:42:26.133 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.133 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.133 INFO: Setting log level to 20 00:42:26.133 INFO: Setting log level to 20 00:42:26.133 INFO: Log level set to 20 00:42:26.133 INFO: Log level set to 20 00:42:26.133 
INFO: Requests: 00:42:26.133 { 00:42:26.133 "jsonrpc": "2.0", 00:42:26.133 "method": "framework_start_init", 00:42:26.133 "id": 1 00:42:26.133 } 00:42:26.133 00:42:26.133 INFO: Requests: 00:42:26.133 { 00:42:26.133 "jsonrpc": "2.0", 00:42:26.133 "method": "framework_start_init", 00:42:26.133 "id": 1 00:42:26.133 } 00:42:26.133 00:42:26.133 [2024-12-15 05:42:39.716434] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:42:26.133 INFO: response: 00:42:26.133 { 00:42:26.133 "jsonrpc": "2.0", 00:42:26.133 "id": 1, 00:42:26.133 "result": true 00:42:26.133 } 00:42:26.133 00:42:26.133 INFO: response: 00:42:26.133 { 00:42:26.133 "jsonrpc": "2.0", 00:42:26.133 "id": 1, 00:42:26.133 "result": true 00:42:26.133 } 00:42:26.133 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.133 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.133 INFO: Setting log level to 40 00:42:26.133 INFO: Setting log level to 40 00:42:26.133 INFO: Setting log level to 40 00:42:26.133 [2024-12-15 05:42:39.729706] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:26.133 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:26.133 05:42:39 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:42:26.133 05:42:39 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:26.133 05:42:39 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.418 Nvme0n1 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.418 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.418 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.418 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.418 [2024-12-15 05:42:42.644247] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:29.418 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:42:29.419 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.419 05:42:42 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.419 [ 00:42:29.419 { 00:42:29.419 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:42:29.419 "subtype": "Discovery", 00:42:29.419 "listen_addresses": [], 00:42:29.419 "allow_any_host": true, 00:42:29.419 "hosts": [] 00:42:29.419 }, 00:42:29.419 { 00:42:29.419 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:42:29.419 "subtype": "NVMe", 00:42:29.419 "listen_addresses": [ 00:42:29.419 { 00:42:29.419 "trtype": "TCP", 00:42:29.419 "adrfam": "IPv4", 00:42:29.419 "traddr": "10.0.0.2", 00:42:29.419 "trsvcid": "4420" 00:42:29.419 } 00:42:29.419 ], 00:42:29.419 "allow_any_host": true, 00:42:29.419 "hosts": [], 00:42:29.419 "serial_number": "SPDK00000000000001", 00:42:29.419 "model_number": "SPDK bdev Controller", 00:42:29.419 "max_namespaces": 1, 00:42:29.419 "min_cntlid": 1, 00:42:29.419 "max_cntlid": 65519, 00:42:29.419 "namespaces": [ 00:42:29.419 { 00:42:29.419 "nsid": 1, 00:42:29.419 "bdev_name": "Nvme0n1", 00:42:29.419 "name": "Nvme0n1", 00:42:29.419 "nguid": "E0757E05DF3043D8A124842672DBBEA8", 00:42:29.419 "uuid": "e0757e05-df30-43d8-a124-842672dbbea8" 00:42:29.419 } 00:42:29.419 ] 00:42:29.419 } 00:42:29.419 ] 00:42:29.419 05:42:42 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ7244049A1P0FGN 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:42:29.419 05:42:42 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:42:29.419 05:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:42:29.419 05:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ7244049A1P0FGN '!=' BTLJ7244049A1P0FGN ']' 00:42:29.419 05:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:42:29.419 05:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:42:29.419 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:29.419 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:29.419 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:29.419 05:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:42:29.419 05:42:43 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:42:29.419 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@516 -- # nvmfcleanup 00:42:29.419 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@121 -- # sync 00:42:29.419 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:42:29.419 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set +e 00:42:29.419 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@125 -- # for i in {1..20} 00:42:29.419 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:42:29.678 rmmod nvme_tcp 00:42:29.678 rmmod nvme_fabrics 00:42:29.678 rmmod nvme_keyring 00:42:29.678 05:42:43 
nvmf_identify_passthru -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:42:29.678 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@128 -- # set -e 00:42:29.678 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@129 -- # return 0 00:42:29.678 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@517 -- # '[' -n 629711 ']' 00:42:29.678 05:42:43 nvmf_identify_passthru -- nvmf/common.sh@518 -- # killprocess 629711 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 629711 ']' 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 629711 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 629711 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 629711' 00:42:29.678 killing process with pid 629711 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 629711 00:42:29.678 05:42:43 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 629711 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@520 -- # '[' '' == iso ']' 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@297 -- # iptr 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-save 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- 
# grep -v SPDK_NVMF 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@791 -- # iptables-restore 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@302 -- # remove_spdk_ns 00:42:31.054 05:42:44 nvmf_identify_passthru -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:31.054 05:42:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:31.054 05:42:44 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:33.591 05:42:46 nvmf_identify_passthru -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:42:33.591 00:42:33.591 real 0m21.890s 00:42:33.591 user 0m28.006s 00:42:33.591 sys 0m5.292s 00:42:33.591 05:42:46 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:33.591 05:42:46 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:42:33.591 ************************************ 00:42:33.591 END TEST nvmf_identify_passthru 00:42:33.591 ************************************ 00:42:33.591 05:42:46 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:33.591 05:42:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:33.591 05:42:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:33.591 05:42:46 -- common/autotest_common.sh@10 -- # set +x 00:42:33.591 ************************************ 00:42:33.591 START TEST nvmf_dif 00:42:33.591 ************************************ 00:42:33.591 05:42:46 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:42:33.591 * Looking for test storage... 
00:42:33.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:42:33.591 05:42:46 nvmf_dif -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:42:33.591 05:42:46 nvmf_dif -- common/autotest_common.sh@1711 -- # lcov --version 00:42:33.591 05:42:46 nvmf_dif -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:42:33.591 05:42:47 nvmf_dif -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:42:33.591 05:42:47 nvmf_dif -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:42:33.591 05:42:47 nvmf_dif -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:42:33.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.591 --rc genhtml_branch_coverage=1 00:42:33.591 --rc genhtml_function_coverage=1 00:42:33.591 --rc genhtml_legend=1 00:42:33.591 --rc geninfo_all_blocks=1 00:42:33.591 --rc geninfo_unexecuted_blocks=1 00:42:33.591 00:42:33.591 ' 00:42:33.591 05:42:47 nvmf_dif -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:42:33.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.591 --rc genhtml_branch_coverage=1 00:42:33.591 --rc genhtml_function_coverage=1 00:42:33.591 --rc genhtml_legend=1 00:42:33.591 --rc geninfo_all_blocks=1 00:42:33.591 --rc geninfo_unexecuted_blocks=1 00:42:33.591 00:42:33.591 ' 00:42:33.591 05:42:47 nvmf_dif -- common/autotest_common.sh@1725 -- # export 
'LCOV=lcov 00:42:33.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.591 --rc genhtml_branch_coverage=1 00:42:33.591 --rc genhtml_function_coverage=1 00:42:33.591 --rc genhtml_legend=1 00:42:33.591 --rc geninfo_all_blocks=1 00:42:33.591 --rc geninfo_unexecuted_blocks=1 00:42:33.591 00:42:33.591 ' 00:42:33.591 05:42:47 nvmf_dif -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:42:33.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:42:33.591 --rc genhtml_branch_coverage=1 00:42:33.591 --rc genhtml_function_coverage=1 00:42:33.591 --rc genhtml_legend=1 00:42:33.591 --rc geninfo_all_blocks=1 00:42:33.591 --rc geninfo_unexecuted_blocks=1 00:42:33.591 00:42:33.591 ' 00:42:33.591 05:42:47 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:42:33.591 05:42:47 nvmf_dif -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:42:33.591 05:42:47 nvmf_dif -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:42:33.591 05:42:47 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:42:33.591 05:42:47 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.591 05:42:47 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.591 05:42:47 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.592 05:42:47 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:42:33.592 05:42:47 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@51 -- # : 0 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:42:33.592 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@55 -- # have_pci_nics=0 00:42:33.592 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:42:33.592 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 
00:42:33.592 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:42:33.592 05:42:47 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:42:33.592 05:42:47 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@476 -- # prepare_net_devs 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@438 -- # local -g is_hw=no 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@440 -- # remove_spdk_ns 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:42:33.592 05:42:47 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:42:33.592 05:42:47 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:42:33.592 05:42:47 nvmf_dif -- nvmf/common.sh@309 -- # xtrace_disable 00:42:33.592 05:42:47 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@315 -- # pci_devs=() 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@315 -- # local -a pci_devs 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@316 -- # pci_net_devs=() 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@317 -- # pci_drivers=() 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@317 -- # local -A pci_drivers 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@319 -- # net_devs=() 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@319 -- # local -ga net_devs 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@320 -- # e810=() 00:42:40.162 05:42:52 nvmf_dif 
-- nvmf/common.sh@320 -- # local -ga e810 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@321 -- # x722=() 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@321 -- # local -ga x722 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@322 -- # mlx=() 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@322 -- # local -ga mlx 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@361 -- # (( 2 == 0 
)) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:42:40.162 Found 0000:af:00.0 (0x8086 - 0x159b) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:42:40.162 Found 0000:af:00.1 (0x8086 - 0x159b) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:40.162 05:42:52 nvmf_dif -- 
nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:42:40.162 Found net devices under 0000:af:00.0: cvl_0_0 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@418 -- # [[ up == up ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:42:40.162 Found net devices under 0000:af:00.1: cvl_0_1 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@442 -- # is_hw=yes 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:42:40.162 
05:42:52 nvmf_dif -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:42:40.162 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:42:40.162 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.393 ms 00:42:40.162 00:42:40.162 --- 10.0.0.2 ping statistics --- 00:42:40.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.162 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:42:40.162 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:42:40.162 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.232 ms 00:42:40.162 00:42:40.162 --- 10.0.0.1 ping statistics --- 00:42:40.162 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:42:40.162 rtt min/avg/max/mdev = 0.232/0.232/0.232/0.000 ms 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@450 -- # return 0 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:42:40.162 05:42:52 nvmf_dif -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:42:42.071 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:42:42.071 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:80:04.4 (8086 2021): Already 
using the vfio-pci driver 00:42:42.071 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:42:42.071 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:42:42.071 05:42:55 nvmf_dif -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:42:42.071 05:42:55 nvmf_dif -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:42:42.071 05:42:55 nvmf_dif -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:42:42.071 05:42:55 nvmf_dif -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:42:42.071 05:42:55 nvmf_dif -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:42:42.071 05:42:55 nvmf_dif -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:42:42.330 05:42:55 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:42:42.330 05:42:55 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:42:42.330 05:42:55 nvmf_dif -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.330 05:42:55 nvmf_dif -- nvmf/common.sh@509 -- # nvmfpid=635079 00:42:42.330 05:42:55 nvmf_dif -- nvmf/common.sh@510 -- # waitforlisten 635079 00:42:42.330 05:42:55 nvmf_dif -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 635079 ']' 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:42:42.330 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:42:42.330 05:42:55 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.330 [2024-12-15 05:42:55.840074] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:42:42.330 [2024-12-15 05:42:55.840122] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:42:42.330 [2024-12-15 05:42:55.918276] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:42:42.330 [2024-12-15 05:42:55.939824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:42:42.330 [2024-12-15 05:42:55.939859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:42:42.330 [2024-12-15 05:42:55.939866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:42:42.330 [2024-12-15 05:42:55.939872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:42:42.330 [2024-12-15 05:42:55.939877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:42:42.330 [2024-12-15 05:42:55.940370] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:42:42.589 05:42:56 nvmf_dif -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 05:42:56 nvmf_dif -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:42:42.589 05:42:56 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:42:42.589 05:42:56 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 [2024-12-15 05:42:56.067393] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.589 05:42:56 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:42.589 05:42:56 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 ************************************ 00:42:42.589 START TEST fio_dif_1_default 00:42:42.589 ************************************ 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 bdev_null0 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:42.589 [2024-12-15 05:42:56.139694] 
tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # config=() 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@560 -- # local subsystem config 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:42.589 { 00:42:42.589 "params": { 00:42:42.589 "name": "Nvme$subsystem", 00:42:42.589 "trtype": "$TEST_TRANSPORT", 00:42:42.589 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:42.589 "adrfam": "ipv4", 00:42:42.589 "trsvcid": "$NVMF_PORT", 00:42:42.589 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:42.589 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:42.589 "hdgst": ${hdgst:-false}, 00:42:42.589 "ddgst": ${ddgst:-false} 00:42:42.589 }, 00:42:42.589 "method": "bdev_nvme_attach_controller" 00:42:42.589 } 00:42:42.589 EOF 00:42:42.589 )") 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 
00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@582 -- # cat 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@584 -- # jq . 
00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@585 -- # IFS=, 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:42.589 "params": { 00:42:42.589 "name": "Nvme0", 00:42:42.589 "trtype": "tcp", 00:42:42.589 "traddr": "10.0.0.2", 00:42:42.589 "adrfam": "ipv4", 00:42:42.589 "trsvcid": "4420", 00:42:42.589 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:42.589 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:42.589 "hdgst": false, 00:42:42.589 "ddgst": false 00:42:42.589 }, 00:42:42.589 "method": "bdev_nvme_attach_controller" 00:42:42.589 }' 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:42.589 05:42:56 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:43.155 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:43.155 fio-3.35 
00:42:43.155 Starting 1 thread 00:42:55.355 00:42:55.355 filename0: (groupid=0, jobs=1): err= 0: pid=635443: Sun Dec 15 05:43:07 2024 00:42:55.355 read: IOPS=96, BW=385KiB/s (394kB/s)(3856KiB/10015msec) 00:42:55.355 slat (nsec): min=5897, max=34876, avg=6472.36, stdev=1676.22 00:42:55.355 clat (usec): min=40871, max=42973, avg=41537.51, stdev=504.01 00:42:55.355 lat (usec): min=40877, max=43006, avg=41543.98, stdev=504.16 00:42:55.355 clat percentiles (usec): 00:42:55.355 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:42:55.355 | 30.00th=[41157], 40.00th=[41157], 50.00th=[42206], 60.00th=[42206], 00:42:55.355 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:42:55.355 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:42:55.355 | 99.99th=[42730] 00:42:55.355 bw ( KiB/s): min= 352, max= 416, per=99.73%, avg=384.00, stdev=10.38, samples=20 00:42:55.355 iops : min= 88, max= 104, avg=96.00, stdev= 2.60, samples=20 00:42:55.355 lat (msec) : 50=100.00% 00:42:55.355 cpu : usr=92.05%, sys=7.69%, ctx=20, majf=0, minf=0 00:42:55.355 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:42:55.355 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.355 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:42:55.355 issued rwts: total=964,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:42:55.355 latency : target=0, window=0, percentile=100.00%, depth=4 00:42:55.355 00:42:55.355 Run status group 0 (all jobs): 00:42:55.355 READ: bw=385KiB/s (394kB/s), 385KiB/s-385KiB/s (394kB/s-394kB/s), io=3856KiB (3949kB), run=10015-10015msec 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 00:42:55.355 real 0m11.335s 00:42:55.355 user 0m15.894s 00:42:55.355 sys 0m1.148s 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 ************************************ 00:42:55.355 END TEST fio_dif_1_default 00:42:55.355 ************************************ 00:42:55.355 05:43:07 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:42:55.355 05:43:07 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:42:55.355 05:43:07 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:42:55.355 05:43:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 ************************************ 00:42:55.355 START TEST fio_dif_1_multi_subsystems 00:42:55.355 ************************************ 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 bdev_null0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:42:55.355 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.356 [2024-12-15 05:43:07.547280] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.356 bdev_null1 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.356 05:43:07 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # config=() 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@560 -- # local subsystem config 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:55.356 { 00:42:55.356 "params": { 00:42:55.356 "name": "Nvme$subsystem", 00:42:55.356 "trtype": "$TEST_TRANSPORT", 00:42:55.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:55.356 "adrfam": "ipv4", 00:42:55.356 "trsvcid": "$NVMF_PORT", 00:42:55.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:55.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:55.356 "hdgst": ${hdgst:-false}, 00:42:55.356 "ddgst": ${ddgst:-false} 00:42:55.356 }, 00:42:55.356 "method": "bdev_nvme_attach_controller" 00:42:55.356 } 00:42:55.356 EOF 00:42:55.356 )") 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:55.356 05:43:07 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:42:55.356 { 00:42:55.356 "params": { 00:42:55.356 "name": "Nvme$subsystem", 00:42:55.356 "trtype": "$TEST_TRANSPORT", 00:42:55.356 "traddr": "$NVMF_FIRST_TARGET_IP", 00:42:55.356 "adrfam": "ipv4", 00:42:55.356 "trsvcid": "$NVMF_PORT", 00:42:55.356 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:42:55.356 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:42:55.356 "hdgst": ${hdgst:-false}, 00:42:55.356 "ddgst": ${ddgst:-false} 00:42:55.356 }, 00:42:55.356 "method": "bdev_nvme_attach_controller" 00:42:55.356 } 00:42:55.356 EOF 00:42:55.356 )") 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@582 -- # cat 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@584 -- # jq . 
00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@585 -- # IFS=, 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:42:55.356 "params": { 00:42:55.356 "name": "Nvme0", 00:42:55.356 "trtype": "tcp", 00:42:55.356 "traddr": "10.0.0.2", 00:42:55.356 "adrfam": "ipv4", 00:42:55.356 "trsvcid": "4420", 00:42:55.356 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:42:55.356 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:42:55.356 "hdgst": false, 00:42:55.356 "ddgst": false 00:42:55.356 }, 00:42:55.356 "method": "bdev_nvme_attach_controller" 00:42:55.356 },{ 00:42:55.356 "params": { 00:42:55.356 "name": "Nvme1", 00:42:55.356 "trtype": "tcp", 00:42:55.356 "traddr": "10.0.0.2", 00:42:55.356 "adrfam": "ipv4", 00:42:55.356 "trsvcid": "4420", 00:42:55.356 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:42:55.356 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:42:55.356 "hdgst": false, 00:42:55.356 "ddgst": false 00:42:55.356 }, 00:42:55.356 "method": "bdev_nvme_attach_controller" 00:42:55.356 }' 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:42:55.356 05:43:07 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:42:55.356 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:55.356 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:42:55.356 fio-3.35 00:42:55.356 Starting 2 threads 00:43:05.327 00:43:05.327 filename0: (groupid=0, jobs=1): err= 0: pid=637358: Sun Dec 15 05:43:18 2024 00:43:05.327 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10007msec) 00:43:05.327 slat (nsec): min=6044, max=39787, avg=8170.02, stdev=2766.58 00:43:05.327 clat (usec): min=40813, max=42828, avg=40987.53, stdev=136.90 00:43:05.327 lat (usec): min=40819, max=42839, avg=40995.70, stdev=137.41 00:43:05.327 clat percentiles (usec): 00:43:05.327 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:05.327 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:05.327 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:05.327 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42730], 99.95th=[42730], 00:43:05.327 | 99.99th=[42730] 00:43:05.327 bw ( KiB/s): min= 384, max= 416, per=49.86%, avg=389.05, stdev=11.99, samples=19 00:43:05.327 iops : min= 96, max= 104, avg=97.26, stdev= 3.00, samples=19 00:43:05.327 lat (msec) : 50=100.00% 00:43:05.327 cpu : usr=96.67%, sys=3.08%, ctx=7, majf=0, minf=37 00:43:05.327 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.327 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.327 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:05.327 filename1: (groupid=0, jobs=1): err= 0: pid=637359: Sun Dec 15 05:43:18 2024 00:43:05.327 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10008msec) 00:43:05.327 slat (nsec): min=6037, max=40905, avg=8113.43, stdev=2739.86 00:43:05.327 clat (usec): min=40810, max=42040, avg=40992.04, stdev=132.40 00:43:05.327 lat (usec): min=40821, max=42052, avg=41000.15, stdev=132.85 00:43:05.327 clat percentiles (usec): 00:43:05.327 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:43:05.327 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:43:05.327 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:43:05.327 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:43:05.327 | 99.99th=[42206] 00:43:05.327 bw ( KiB/s): min= 384, max= 416, per=49.73%, avg=388.80, stdev=11.72, samples=20 00:43:05.327 iops : min= 96, max= 104, avg=97.20, stdev= 2.93, samples=20 00:43:05.327 lat (msec) : 50=100.00% 00:43:05.327 cpu : usr=96.53%, sys=3.22%, ctx=13, majf=0, minf=100 00:43:05.327 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:05.327 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.327 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:05.327 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:05.327 latency : target=0, window=0, percentile=100.00%, depth=4 00:43:05.327 00:43:05.327 Run status group 0 (all jobs): 00:43:05.327 READ: bw=780KiB/s (799kB/s), 390KiB/s-390KiB/s (399kB/s-399kB/s), io=7808KiB (7995kB), run=10007-10008msec 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- 
# local sub 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.327 05:43:18 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.327 00:43:05.327 real 0m11.477s 00:43:05.327 user 0m26.862s 00:43:05.327 sys 0m1.022s 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:05.327 05:43:18 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:43:05.328 ************************************ 00:43:05.328 END TEST fio_dif_1_multi_subsystems 00:43:05.328 ************************************ 00:43:05.586 05:43:19 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:43:05.586 05:43:19 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:05.586 05:43:19 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:05.586 05:43:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:05.586 ************************************ 00:43:05.586 START TEST fio_dif_rand_params 00:43:05.586 ************************************ 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:43:05.586 05:43:19 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.586 bdev_null0 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:05.586 [2024-12-15 05:43:19.100794] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:05.586 { 00:43:05.586 "params": { 00:43:05.586 "name": "Nvme$subsystem", 00:43:05.586 "trtype": "$TEST_TRANSPORT", 00:43:05.586 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:05.586 "adrfam": "ipv4", 00:43:05.586 "trsvcid": "$NVMF_PORT", 
00:43:05.586 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:05.586 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:05.586 "hdgst": ${hdgst:-false}, 00:43:05.586 "ddgst": ${ddgst:-false} 00:43:05.586 }, 00:43:05.586 "method": "bdev_nvme_attach_controller" 00:43:05.586 } 00:43:05.586 EOF 00:43:05.586 )") 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:05.586 
05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:05.586 "params": { 00:43:05.586 "name": "Nvme0", 00:43:05.586 "trtype": "tcp", 00:43:05.586 "traddr": "10.0.0.2", 00:43:05.586 "adrfam": "ipv4", 00:43:05.586 "trsvcid": "4420", 00:43:05.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:05.586 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:05.586 "hdgst": false, 00:43:05.586 "ddgst": false 00:43:05.586 }, 00:43:05.586 "method": "bdev_nvme_attach_controller" 00:43:05.586 }' 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:05.586 05:43:19 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:05.845 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, 
(W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:05.845 ... 00:43:05.845 fio-3.35 00:43:05.845 Starting 3 threads 00:43:12.407 00:43:12.407 filename0: (groupid=0, jobs=1): err= 0: pid=639264: Sun Dec 15 05:43:25 2024 00:43:12.407 read: IOPS=333, BW=41.7MiB/s (43.7MB/s)(209MiB/5008msec) 00:43:12.407 slat (nsec): min=6299, max=42921, avg=17241.50, stdev=6392.73 00:43:12.407 clat (usec): min=3970, max=51402, avg=8974.42, stdev=4593.23 00:43:12.407 lat (usec): min=3980, max=51425, avg=8991.66, stdev=4592.96 00:43:12.407 clat percentiles (usec): 00:43:12.407 | 1.00th=[ 5669], 5.00th=[ 6521], 10.00th=[ 7308], 20.00th=[ 7701], 00:43:12.407 | 30.00th=[ 8029], 40.00th=[ 8291], 50.00th=[ 8455], 60.00th=[ 8717], 00:43:12.407 | 70.00th=[ 8979], 80.00th=[ 9372], 90.00th=[ 9765], 95.00th=[10421], 00:43:12.407 | 99.00th=[47449], 99.50th=[49021], 99.90th=[50594], 99.95th=[51643], 00:43:12.407 | 99.99th=[51643] 00:43:12.407 bw ( KiB/s): min=27648, max=47104, per=35.46%, avg=42675.20, stdev=5776.32, samples=10 00:43:12.407 iops : min= 216, max= 368, avg=333.40, stdev=45.13, samples=10 00:43:12.407 lat (msec) : 4=0.12%, 10=92.40%, 20=6.23%, 50=0.96%, 100=0.30% 00:43:12.407 cpu : usr=96.44%, sys=3.24%, ctx=11, majf=0, minf=0 00:43:12.407 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.407 issued rwts: total=1670,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:12.407 filename0: (groupid=0, jobs=1): err= 0: pid=639266: Sun Dec 15 05:43:25 2024 00:43:12.407 read: IOPS=307, BW=38.5MiB/s (40.3MB/s)(194MiB/5045msec) 00:43:12.407 slat (nsec): min=6281, max=50722, avg=23822.42, stdev=7849.54 00:43:12.407 clat (usec): min=3375, max=90256, avg=9699.26, stdev=4985.48 00:43:12.407 lat (usec): min=3382, 
max=90280, avg=9723.08, stdev=4985.48 00:43:12.407 clat percentiles (usec): 00:43:12.407 | 1.00th=[ 3621], 5.00th=[ 6390], 10.00th=[ 7308], 20.00th=[ 8291], 00:43:12.407 | 30.00th=[ 8717], 40.00th=[ 8979], 50.00th=[ 9372], 60.00th=[ 9634], 00:43:12.407 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11469], 00:43:12.407 | 99.00th=[45876], 99.50th=[50070], 99.90th=[51643], 99.95th=[90702], 00:43:12.407 | 99.99th=[90702] 00:43:12.407 bw ( KiB/s): min=34560, max=45824, per=32.95%, avg=39654.40, stdev=3022.90, samples=10 00:43:12.407 iops : min= 270, max= 358, avg=309.80, stdev=23.62, samples=10 00:43:12.407 lat (msec) : 4=2.06%, 10=68.69%, 20=28.03%, 50=0.71%, 100=0.52% 00:43:12.407 cpu : usr=96.47%, sys=3.21%, ctx=10, majf=0, minf=11 00:43:12.407 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.407 issued rwts: total=1552,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:12.407 filename0: (groupid=0, jobs=1): err= 0: pid=639267: Sun Dec 15 05:43:25 2024 00:43:12.407 read: IOPS=303, BW=38.0MiB/s (39.8MB/s)(190MiB/5004msec) 00:43:12.407 slat (nsec): min=6323, max=55653, avg=17185.33, stdev=6144.15 00:43:12.407 clat (usec): min=4090, max=51775, avg=9849.62, stdev=3917.95 00:43:12.407 lat (usec): min=4104, max=51788, avg=9866.80, stdev=3918.00 00:43:12.407 clat percentiles (usec): 00:43:12.407 | 1.00th=[ 5276], 5.00th=[ 6325], 10.00th=[ 7242], 20.00th=[ 8455], 00:43:12.407 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9765], 60.00th=[10159], 00:43:12.407 | 70.00th=[10552], 80.00th=[10945], 90.00th=[11338], 95.00th=[11863], 00:43:12.407 | 99.00th=[13435], 99.50th=[49021], 99.90th=[51643], 99.95th=[51643], 00:43:12.407 | 99.99th=[51643] 00:43:12.407 bw ( KiB/s): min=35328, max=41472, per=32.31%, 
avg=38886.40, stdev=1638.98, samples=10 00:43:12.407 iops : min= 276, max= 324, avg=303.80, stdev=12.80, samples=10 00:43:12.407 lat (msec) : 10=56.87%, 20=42.34%, 50=0.39%, 100=0.39% 00:43:12.407 cpu : usr=94.28%, sys=3.96%, ctx=390, majf=0, minf=0 00:43:12.407 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:12.407 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.407 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:12.407 issued rwts: total=1521,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:12.407 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:12.407 00:43:12.407 Run status group 0 (all jobs): 00:43:12.407 READ: bw=118MiB/s (123MB/s), 38.0MiB/s-41.7MiB/s (39.8MB/s-43.7MB/s), io=593MiB (622MB), run=5004-5045msec 00:43:12.407 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:43:12.407 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:12.407 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:12.407 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:12.407 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:12.407 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:12.407 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 bdev_null0 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 [2024-12-15 05:43:25.249979] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 bdev_null1 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 bdev_null2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:12.408 05:43:25 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:12.408 { 00:43:12.408 "params": { 00:43:12.408 "name": "Nvme$subsystem", 00:43:12.408 "trtype": "$TEST_TRANSPORT", 00:43:12.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:12.408 "adrfam": "ipv4", 00:43:12.408 "trsvcid": "$NVMF_PORT", 00:43:12.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:12.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:12.408 "hdgst": ${hdgst:-false}, 00:43:12.408 "ddgst": ${ddgst:-false} 00:43:12.408 }, 00:43:12.408 "method": "bdev_nvme_attach_controller" 00:43:12.408 } 00:43:12.408 EOF 00:43:12.408 )") 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:12.408 05:43:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:12.408 { 00:43:12.408 "params": { 00:43:12.408 "name": "Nvme$subsystem", 00:43:12.408 "trtype": "$TEST_TRANSPORT", 00:43:12.408 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:12.408 "adrfam": "ipv4", 00:43:12.408 "trsvcid": "$NVMF_PORT", 00:43:12.408 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:12.408 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:12.408 "hdgst": ${hdgst:-false}, 00:43:12.408 "ddgst": ${ddgst:-false} 00:43:12.408 }, 00:43:12.408 "method": "bdev_nvme_attach_controller" 00:43:12.408 } 00:43:12.408 EOF 00:43:12.408 )") 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:12.408 05:43:25 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:12.408 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:12.409 { 00:43:12.409 "params": { 00:43:12.409 "name": "Nvme$subsystem", 00:43:12.409 "trtype": "$TEST_TRANSPORT", 00:43:12.409 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:12.409 "adrfam": "ipv4", 00:43:12.409 "trsvcid": "$NVMF_PORT", 00:43:12.409 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:12.409 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:12.409 "hdgst": ${hdgst:-false}, 00:43:12.409 "ddgst": ${ddgst:-false} 00:43:12.409 }, 00:43:12.409 "method": "bdev_nvme_attach_controller" 00:43:12.409 } 00:43:12.409 EOF 00:43:12.409 )") 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:12.409 "params": { 00:43:12.409 "name": "Nvme0", 00:43:12.409 "trtype": "tcp", 00:43:12.409 "traddr": "10.0.0.2", 00:43:12.409 "adrfam": "ipv4", 00:43:12.409 "trsvcid": "4420", 00:43:12.409 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:12.409 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:12.409 "hdgst": false, 00:43:12.409 "ddgst": false 00:43:12.409 }, 00:43:12.409 "method": "bdev_nvme_attach_controller" 00:43:12.409 },{ 00:43:12.409 "params": { 00:43:12.409 "name": "Nvme1", 00:43:12.409 "trtype": "tcp", 00:43:12.409 "traddr": "10.0.0.2", 00:43:12.409 "adrfam": "ipv4", 00:43:12.409 "trsvcid": "4420", 00:43:12.409 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:12.409 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:12.409 "hdgst": false, 00:43:12.409 "ddgst": false 00:43:12.409 }, 00:43:12.409 "method": "bdev_nvme_attach_controller" 00:43:12.409 },{ 00:43:12.409 "params": { 00:43:12.409 "name": "Nvme2", 00:43:12.409 "trtype": "tcp", 00:43:12.409 "traddr": "10.0.0.2", 00:43:12.409 "adrfam": "ipv4", 00:43:12.409 "trsvcid": "4420", 00:43:12.409 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:43:12.409 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:43:12.409 "hdgst": false, 00:43:12.409 "ddgst": false 00:43:12.409 }, 00:43:12.409 "method": "bdev_nvme_attach_controller" 00:43:12.409 }' 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:12.409 05:43:25 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:12.409 05:43:25 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:12.409 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:12.409 ... 00:43:12.409 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:12.409 ... 00:43:12.409 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:43:12.409 ... 
00:43:12.409 fio-3.35 00:43:12.409 Starting 24 threads 00:43:24.609 00:43:24.609 filename0: (groupid=0, jobs=1): err= 0: pid=640301: Sun Dec 15 05:43:36 2024 00:43:24.609 read: IOPS=600, BW=2402KiB/s (2459kB/s)(23.5MiB/10020msec) 00:43:24.609 slat (usec): min=6, max=125, avg=38.56, stdev=21.66 00:43:24.609 clat (usec): min=10913, max=38430, avg=26330.10, stdev=1971.53 00:43:24.609 lat (usec): min=10929, max=38463, avg=26368.66, stdev=1976.64 00:43:24.609 clat percentiles (usec): 00:43:24.609 | 1.00th=[18482], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.609 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.609 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28705], 95.00th=[29230], 00:43:24.609 | 99.00th=[30278], 99.50th=[30540], 99.90th=[31065], 99.95th=[34341], 00:43:24.609 | 99.99th=[38536] 00:43:24.609 bw ( KiB/s): min= 2171, max= 2560, per=4.18%, avg=2399.50, stdev=124.41, samples=20 00:43:24.609 iops : min= 542, max= 640, avg=599.80, stdev=31.21, samples=20 00:43:24.609 lat (msec) : 20=1.40%, 50=98.60% 00:43:24.609 cpu : usr=98.71%, sys=0.83%, ctx=47, majf=0, minf=55 00:43:24.609 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.609 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.609 filename0: (groupid=0, jobs=1): err= 0: pid=640302: Sun Dec 15 05:43:36 2024 00:43:24.609 read: IOPS=598, BW=2393KiB/s (2450kB/s)(23.4MiB/10004msec) 00:43:24.609 slat (nsec): min=7781, max=75275, avg=33944.49, stdev=16124.09 00:43:24.609 clat (usec): min=8144, max=54107, avg=26479.67, stdev=2460.04 00:43:24.609 lat (usec): min=8171, max=54120, avg=26513.61, stdev=2458.79 00:43:24.609 clat percentiles (usec): 00:43:24.609 | 1.00th=[23462], 
5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.609 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870], 00:43:24.609 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29230], 00:43:24.609 | 99.00th=[30802], 99.50th=[31065], 99.90th=[54264], 99.95th=[54264], 00:43:24.609 | 99.99th=[54264] 00:43:24.609 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2384.53, stdev=129.76, samples=19 00:43:24.609 iops : min= 542, max= 640, avg=596.05, stdev=32.54, samples=19 00:43:24.609 lat (msec) : 10=0.27%, 20=0.53%, 50=98.93%, 100=0.27% 00:43:24.609 cpu : usr=97.96%, sys=1.24%, ctx=109, majf=0, minf=43 00:43:24.609 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:24.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.609 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.609 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.609 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.609 filename0: (groupid=0, jobs=1): err= 0: pid=640303: Sun Dec 15 05:43:36 2024 00:43:24.609 read: IOPS=597, BW=2392KiB/s (2449kB/s)(23.4MiB/10007msec) 00:43:24.609 slat (nsec): min=7667, max=92317, avg=25755.22, stdev=18590.88 00:43:24.609 clat (usec): min=18171, max=31818, avg=26532.40, stdev=1615.37 00:43:24.609 lat (usec): min=18184, max=31843, avg=26558.16, stdev=1619.11 00:43:24.609 clat percentiles (usec): 00:43:24.609 | 1.00th=[24249], 5.00th=[24773], 10.00th=[25035], 20.00th=[25035], 00:43:24.609 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26346], 60.00th=[26870], 00:43:24.609 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29230], 00:43:24.609 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31851], 99.95th=[31851], 00:43:24.609 | 99.99th=[31851] 00:43:24.610 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2391.05, stdev=113.75, samples=19 00:43:24.610 iops : min= 574, max= 640, avg=597.68, stdev=28.51, 
samples=19 00:43:24.610 lat (msec) : 20=0.53%, 50=99.47% 00:43:24.610 cpu : usr=98.59%, sys=0.94%, ctx=85, majf=0, minf=40 00:43:24.610 IO depths : 1=5.9%, 2=12.1%, 4=25.0%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:43:24.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.610 filename0: (groupid=0, jobs=1): err= 0: pid=640304: Sun Dec 15 05:43:36 2024 00:43:24.610 read: IOPS=599, BW=2399KiB/s (2457kB/s)(23.4MiB/10004msec) 00:43:24.610 slat (usec): min=6, max=134, avg=47.32, stdev=18.35 00:43:24.610 clat (usec): min=9225, max=31058, avg=26236.86, stdev=1869.22 00:43:24.610 lat (usec): min=9245, max=31097, avg=26284.18, stdev=1874.36 00:43:24.610 clat percentiles (usec): 00:43:24.610 | 1.00th=[18744], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773], 00:43:24.610 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.610 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28443], 95.00th=[28967], 00:43:24.610 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[31065], 00:43:24.610 | 99.99th=[31065] 00:43:24.610 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2405.05, stdev=117.46, samples=19 00:43:24.610 iops : min= 544, max= 640, avg=601.26, stdev=29.37, samples=19 00:43:24.610 lat (msec) : 10=0.27%, 20=0.80%, 50=98.93% 00:43:24.610 cpu : usr=99.07%, sys=0.52%, ctx=35, majf=0, minf=33 00:43:24.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.610 latency : target=0, window=0, percentile=100.00%, 
depth=16 00:43:24.610 filename0: (groupid=0, jobs=1): err= 0: pid=640305: Sun Dec 15 05:43:36 2024 00:43:24.610 read: IOPS=598, BW=2392KiB/s (2450kB/s)(23.4MiB/10006msec) 00:43:24.610 slat (nsec): min=6466, max=93455, avg=39132.14, stdev=19157.82 00:43:24.610 clat (usec): min=15118, max=34718, avg=26380.29, stdev=1640.63 00:43:24.610 lat (usec): min=15129, max=34744, avg=26419.42, stdev=1643.38 00:43:24.610 clat percentiles (usec): 00:43:24.610 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[24773], 00:43:24.610 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.610 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:43:24.610 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[31065], 00:43:24.610 | 99.99th=[34866] 00:43:24.610 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2391.26, stdev=113.58, samples=19 00:43:24.610 iops : min= 574, max= 640, avg=597.74, stdev=28.46, samples=19 00:43:24.610 lat (msec) : 20=0.53%, 50=99.47% 00:43:24.610 cpu : usr=98.80%, sys=0.75%, ctx=36, majf=0, minf=32 00:43:24.610 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:24.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.610 filename0: (groupid=0, jobs=1): err= 0: pid=640306: Sun Dec 15 05:43:36 2024 00:43:24.610 read: IOPS=599, BW=2399KiB/s (2457kB/s)(23.4MiB/10004msec) 00:43:24.610 slat (nsec): min=7530, max=83866, avg=34432.71, stdev=15597.12 00:43:24.610 clat (usec): min=9134, max=31181, avg=26413.26, stdev=1900.55 00:43:24.610 lat (usec): min=9147, max=31197, avg=26447.69, stdev=1900.78 00:43:24.610 clat percentiles (usec): 00:43:24.610 | 1.00th=[18744], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 
00:43:24.610 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870], 00:43:24.610 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28705], 95.00th=[29230], 00:43:24.610 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:43:24.610 | 99.99th=[31065] 00:43:24.610 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2405.05, stdev=117.46, samples=19 00:43:24.610 iops : min= 544, max= 640, avg=601.26, stdev=29.37, samples=19 00:43:24.610 lat (msec) : 10=0.27%, 20=0.80%, 50=98.93% 00:43:24.610 cpu : usr=98.30%, sys=1.17%, ctx=64, majf=0, minf=42 00:43:24.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.610 filename0: (groupid=0, jobs=1): err= 0: pid=640307: Sun Dec 15 05:43:36 2024 00:43:24.610 read: IOPS=598, BW=2392KiB/s (2450kB/s)(23.4MiB/10006msec) 00:43:24.610 slat (nsec): min=5600, max=89785, avg=45624.50, stdev=13837.90 00:43:24.610 clat (usec): min=9508, max=38688, avg=26362.47, stdev=1893.35 00:43:24.610 lat (usec): min=9522, max=38703, avg=26408.10, stdev=1894.74 00:43:24.610 clat percentiles (usec): 00:43:24.610 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773], 00:43:24.610 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.610 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:43:24.610 | 99.00th=[30540], 99.50th=[30802], 99.90th=[38536], 99.95th=[38536], 00:43:24.610 | 99.99th=[38536] 00:43:24.610 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2391.05, stdev=105.78, samples=19 00:43:24.610 iops : min= 542, max= 640, avg=597.68, stdev=26.57, samples=19 00:43:24.610 lat (msec) : 10=0.27%, 20=0.27%, 50=99.47% 
00:43:24.610 cpu : usr=98.18%, sys=1.18%, ctx=77, majf=0, minf=35 00:43:24.610 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.610 filename0: (groupid=0, jobs=1): err= 0: pid=640308: Sun Dec 15 05:43:36 2024 00:43:24.610 read: IOPS=597, BW=2389KiB/s (2446kB/s)(23.3MiB/10003msec) 00:43:24.610 slat (nsec): min=7075, max=88384, avg=23848.17, stdev=16201.98 00:43:24.610 clat (usec): min=12165, max=43105, avg=26595.57, stdev=2231.52 00:43:24.610 lat (usec): min=12175, max=43131, avg=26619.42, stdev=2232.30 00:43:24.610 clat percentiles (usec): 00:43:24.610 | 1.00th=[22676], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:43:24.610 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:43:24.610 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29754], 00:43:24.610 | 99.00th=[31065], 99.50th=[37487], 99.90th=[42730], 99.95th=[43254], 00:43:24.610 | 99.99th=[43254] 00:43:24.610 bw ( KiB/s): min= 2171, max= 2560, per=4.16%, avg=2387.05, stdev=119.75, samples=19 00:43:24.610 iops : min= 542, max= 640, avg=596.68, stdev=30.04, samples=19 00:43:24.610 lat (msec) : 20=0.90%, 50=99.10% 00:43:24.610 cpu : usr=98.55%, sys=1.01%, ctx=75, majf=0, minf=43 00:43:24.610 IO depths : 1=5.2%, 2=11.4%, 4=24.9%, 8=51.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:43:24.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 issued rwts: total=5974,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.610 filename1: (groupid=0, jobs=1): err= 0: 
pid=640309: Sun Dec 15 05:43:36 2024 00:43:24.610 read: IOPS=601, BW=2408KiB/s (2465kB/s)(23.5MiB/10005msec) 00:43:24.610 slat (nsec): min=6302, max=79398, avg=33411.31, stdev=16158.96 00:43:24.610 clat (usec): min=9553, max=55060, avg=26304.56, stdev=2998.15 00:43:24.610 lat (usec): min=9568, max=55078, avg=26337.97, stdev=2998.14 00:43:24.610 clat percentiles (usec): 00:43:24.610 | 1.00th=[16319], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:43:24.610 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26346], 60.00th=[26870], 00:43:24.610 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29492], 00:43:24.610 | 99.00th=[34866], 99.50th=[39060], 99.90th=[54789], 99.95th=[54789], 00:43:24.610 | 99.99th=[55313] 00:43:24.610 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2407.05, stdev=119.70, samples=19 00:43:24.610 iops : min= 544, max= 640, avg=601.68, stdev=29.94, samples=19 00:43:24.610 lat (msec) : 10=0.23%, 20=2.26%, 50=97.24%, 100=0.27% 00:43:24.610 cpu : usr=96.88%, sys=1.82%, ctx=294, majf=0, minf=32 00:43:24.610 IO depths : 1=5.6%, 2=11.3%, 4=23.3%, 8=52.7%, 16=7.1%, 32=0.0%, >=64=0.0% 00:43:24.610 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 complete : 0=0.0%, 4=93.7%, 8=0.7%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.610 issued rwts: total=6022,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.610 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.610 filename1: (groupid=0, jobs=1): err= 0: pid=640310: Sun Dec 15 05:43:36 2024 00:43:24.610 read: IOPS=597, BW=2392KiB/s (2449kB/s)(23.4MiB/10007msec) 00:43:24.610 slat (nsec): min=8371, max=89988, avg=44057.30, stdev=14452.95 00:43:24.610 clat (usec): min=18112, max=35097, avg=26394.13, stdev=1666.04 00:43:24.610 lat (usec): min=18166, max=35152, avg=26438.18, stdev=1667.44 00:43:24.610 clat percentiles (usec): 00:43:24.610 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.610 | 30.00th=[25035], 
40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.610 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28705], 95.00th=[29230], 00:43:24.610 | 99.00th=[30540], 99.50th=[30802], 99.90th=[32637], 99.95th=[32900], 00:43:24.610 | 99.99th=[34866] 00:43:24.610 bw ( KiB/s): min= 2299, max= 2560, per=4.17%, avg=2391.32, stdev=105.21, samples=19 00:43:24.610 iops : min= 574, max= 640, avg=597.79, stdev=26.34, samples=19 00:43:24.610 lat (msec) : 20=0.53%, 50=99.47% 00:43:24.611 cpu : usr=98.41%, sys=1.05%, ctx=64, majf=0, minf=40 00:43:24.611 IO depths : 1=6.1%, 2=12.3%, 4=25.0%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename1: (groupid=0, jobs=1): err= 0: pid=640311: Sun Dec 15 05:43:36 2024 00:43:24.611 read: IOPS=597, BW=2391KiB/s (2448kB/s)(23.4MiB/10011msec) 00:43:24.611 slat (usec): min=7, max=134, avg=46.57, stdev=18.68 00:43:24.611 clat (usec): min=15314, max=37964, avg=26336.96, stdev=1679.64 00:43:24.611 lat (usec): min=15377, max=37996, avg=26383.53, stdev=1682.84 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24511], 20.00th=[24773], 00:43:24.611 | 30.00th=[25035], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.611 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:43:24.611 | 99.00th=[30540], 99.50th=[30802], 99.90th=[34866], 99.95th=[37487], 00:43:24.611 | 99.99th=[38011] 00:43:24.611 bw ( KiB/s): min= 2176, max= 2560, per=4.17%, avg=2391.26, stdev=121.04, samples=19 00:43:24.611 iops : min= 544, max= 640, avg=597.74, stdev=30.28, samples=19 00:43:24.611 lat (msec) : 20=0.57%, 50=99.43% 00:43:24.611 cpu : usr=98.81%, sys=0.74%, ctx=38, majf=0, minf=34 
00:43:24.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename1: (groupid=0, jobs=1): err= 0: pid=640312: Sun Dec 15 05:43:36 2024 00:43:24.611 read: IOPS=602, BW=2410KiB/s (2468kB/s)(23.5MiB/10004msec) 00:43:24.611 slat (nsec): min=6651, max=76737, avg=37180.25, stdev=16032.93 00:43:24.611 clat (usec): min=5489, max=53861, avg=26236.98, stdev=2844.60 00:43:24.611 lat (usec): min=5496, max=53878, avg=26274.16, stdev=2846.42 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[16712], 5.00th=[24249], 10.00th=[24511], 20.00th=[24773], 00:43:24.611 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:43:24.611 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29230], 00:43:24.611 | 99.00th=[30802], 99.50th=[33162], 99.90th=[53740], 99.95th=[53740], 00:43:24.611 | 99.99th=[53740] 00:43:24.611 bw ( KiB/s): min= 2176, max= 2602, per=4.18%, avg=2400.53, stdev=143.47, samples=19 00:43:24.611 iops : min= 544, max= 650, avg=600.11, stdev=35.83, samples=19 00:43:24.611 lat (msec) : 10=0.60%, 20=1.68%, 50=97.46%, 100=0.27% 00:43:24.611 cpu : usr=98.07%, sys=1.24%, ctx=191, majf=0, minf=49 00:43:24.611 IO depths : 1=5.7%, 2=11.5%, 4=23.5%, 8=52.3%, 16=7.0%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=93.8%, 8=0.6%, 16=5.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=6028,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename1: (groupid=0, jobs=1): err= 0: pid=640313: Sun Dec 15 05:43:36 2024 
00:43:24.611 read: IOPS=599, BW=2399KiB/s (2457kB/s)(23.4MiB/10004msec) 00:43:24.611 slat (usec): min=8, max=124, avg=43.81, stdev=19.04 00:43:24.611 clat (usec): min=9283, max=37163, avg=26301.47, stdev=2025.71 00:43:24.611 lat (usec): min=9297, max=37223, avg=26345.28, stdev=2030.95 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[18482], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.611 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.611 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:43:24.611 | 99.00th=[30540], 99.50th=[31065], 99.90th=[36439], 99.95th=[36963], 00:43:24.611 | 99.99th=[36963] 00:43:24.611 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2405.05, stdev=117.46, samples=19 00:43:24.611 iops : min= 544, max= 640, avg=601.26, stdev=29.37, samples=19 00:43:24.611 lat (msec) : 10=0.23%, 20=1.27%, 50=98.50% 00:43:24.611 cpu : usr=97.88%, sys=1.22%, ctx=373, majf=0, minf=55 00:43:24.611 IO depths : 1=5.5%, 2=11.7%, 4=25.0%, 8=50.8%, 16=7.0%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename1: (groupid=0, jobs=1): err= 0: pid=640314: Sun Dec 15 05:43:36 2024 00:43:24.611 read: IOPS=599, BW=2399KiB/s (2457kB/s)(23.4MiB/10004msec) 00:43:24.611 slat (usec): min=8, max=161, avg=42.39, stdev=21.96 00:43:24.611 clat (usec): min=9150, max=30959, avg=26321.59, stdev=1855.83 00:43:24.611 lat (usec): min=9170, max=31030, avg=26363.98, stdev=1861.38 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[18744], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.611 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.611 | 70.00th=[27132], 
80.00th=[27657], 90.00th=[28443], 95.00th=[28967], 00:43:24.611 | 99.00th=[30278], 99.50th=[30540], 99.90th=[30802], 99.95th=[30802], 00:43:24.611 | 99.99th=[31065] 00:43:24.611 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2405.05, stdev=117.46, samples=19 00:43:24.611 iops : min= 544, max= 640, avg=601.26, stdev=29.37, samples=19 00:43:24.611 lat (msec) : 10=0.27%, 20=0.80%, 50=98.93% 00:43:24.611 cpu : usr=98.82%, sys=0.76%, ctx=17, majf=0, minf=55 00:43:24.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename1: (groupid=0, jobs=1): err= 0: pid=640315: Sun Dec 15 05:43:36 2024 00:43:24.611 read: IOPS=598, BW=2392KiB/s (2450kB/s)(23.4MiB/10006msec) 00:43:24.611 slat (nsec): min=6930, max=93437, avg=37770.62, stdev=18882.77 00:43:24.611 clat (usec): min=8120, max=55550, avg=26406.18, stdev=2449.94 00:43:24.611 lat (usec): min=8128, max=55566, avg=26443.95, stdev=2451.27 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.611 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.611 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:43:24.611 | 99.00th=[30540], 99.50th=[30802], 99.90th=[55313], 99.95th=[55313], 00:43:24.611 | 99.99th=[55313] 00:43:24.611 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2384.32, stdev=130.12, samples=19 00:43:24.611 iops : min= 542, max= 640, avg=596.00, stdev=32.63, samples=19 00:43:24.611 lat (msec) : 10=0.27%, 20=0.53%, 50=98.93%, 100=0.27% 00:43:24.611 cpu : usr=98.46%, sys=1.04%, ctx=39, majf=0, minf=30 00:43:24.611 IO depths : 1=6.2%, 2=12.5%, 
4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename1: (groupid=0, jobs=1): err= 0: pid=640316: Sun Dec 15 05:43:36 2024 00:43:24.611 read: IOPS=598, BW=2392KiB/s (2450kB/s)(23.4MiB/10006msec) 00:43:24.611 slat (nsec): min=6427, max=79401, avg=21699.54, stdev=14613.19 00:43:24.611 clat (usec): min=12076, max=45607, avg=26542.78, stdev=2039.12 00:43:24.611 lat (usec): min=12090, max=45616, avg=26564.48, stdev=2039.39 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[23987], 5.00th=[24773], 10.00th=[24773], 20.00th=[25035], 00:43:24.611 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:43:24.611 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29492], 00:43:24.611 | 99.00th=[30802], 99.50th=[35390], 99.90th=[40633], 99.95th=[41157], 00:43:24.611 | 99.99th=[45351] 00:43:24.611 bw ( KiB/s): min= 2171, max= 2560, per=4.17%, avg=2391.26, stdev=119.98, samples=19 00:43:24.611 iops : min= 542, max= 640, avg=597.74, stdev=30.10, samples=19 00:43:24.611 lat (msec) : 20=0.77%, 50=99.23% 00:43:24.611 cpu : usr=98.75%, sys=0.84%, ctx=14, majf=0, minf=31 00:43:24.611 IO depths : 1=5.0%, 2=11.3%, 4=25.0%, 8=51.2%, 16=7.5%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename2: (groupid=0, jobs=1): err= 0: pid=640317: Sun Dec 15 05:43:36 2024 00:43:24.611 read: IOPS=599, BW=2399KiB/s 
(2457kB/s)(23.4MiB/10004msec) 00:43:24.611 slat (usec): min=8, max=105, avg=44.75, stdev=17.47 00:43:24.611 clat (usec): min=10805, max=31057, avg=26293.88, stdev=1854.89 00:43:24.611 lat (usec): min=10819, max=31097, avg=26338.63, stdev=1858.48 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[18482], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.611 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.611 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28443], 95.00th=[29230], 00:43:24.611 | 99.00th=[30540], 99.50th=[30802], 99.90th=[30802], 99.95th=[31065], 00:43:24.611 | 99.99th=[31065] 00:43:24.611 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2405.05, stdev=117.46, samples=19 00:43:24.611 iops : min= 544, max= 640, avg=601.26, stdev=29.37, samples=19 00:43:24.611 lat (msec) : 20=1.07%, 50=98.93% 00:43:24.611 cpu : usr=98.79%, sys=0.73%, ctx=28, majf=0, minf=29 00:43:24.611 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.611 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.611 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.611 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.611 filename2: (groupid=0, jobs=1): err= 0: pid=640318: Sun Dec 15 05:43:36 2024 00:43:24.611 read: IOPS=598, BW=2392KiB/s (2450kB/s)(23.4MiB/10006msec) 00:43:24.611 slat (nsec): min=7469, max=92743, avg=35289.88, stdev=18740.74 00:43:24.611 clat (usec): min=8134, max=55506, avg=26444.40, stdev=2442.69 00:43:24.611 lat (usec): min=8145, max=55523, avg=26479.69, stdev=2444.41 00:43:24.611 clat percentiles (usec): 00:43:24.611 | 1.00th=[23987], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.611 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.611 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28705], 
95.00th=[29230], 00:43:24.611 | 99.00th=[30540], 99.50th=[30802], 99.90th=[55313], 99.95th=[55313], 00:43:24.612 | 99.99th=[55313] 00:43:24.612 bw ( KiB/s): min= 2171, max= 2560, per=4.15%, avg=2384.32, stdev=130.12, samples=19 00:43:24.612 iops : min= 542, max= 640, avg=596.00, stdev=32.63, samples=19 00:43:24.612 lat (msec) : 10=0.27%, 20=0.53%, 50=98.93%, 100=0.27% 00:43:24.612 cpu : usr=98.31%, sys=1.16%, ctx=102, majf=0, minf=37 00:43:24.612 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:43:24.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 issued rwts: total=5984,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.612 filename2: (groupid=0, jobs=1): err= 0: pid=640319: Sun Dec 15 05:43:36 2024 00:43:24.612 read: IOPS=600, BW=2403KiB/s (2460kB/s)(23.5MiB/10016msec) 00:43:24.612 slat (nsec): min=6731, max=80336, avg=26280.20, stdev=14269.23 00:43:24.612 clat (usec): min=10803, max=31188, avg=26434.33, stdev=2012.37 00:43:24.612 lat (usec): min=10820, max=31212, avg=26460.61, stdev=2012.65 00:43:24.612 clat percentiles (usec): 00:43:24.612 | 1.00th=[17957], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.612 | 30.00th=[25297], 40.00th=[26084], 50.00th=[26608], 60.00th=[26870], 00:43:24.612 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29230], 00:43:24.612 | 99.00th=[30540], 99.50th=[30802], 99.90th=[31065], 99.95th=[31065], 00:43:24.612 | 99.99th=[31065] 00:43:24.612 bw ( KiB/s): min= 2299, max= 2560, per=4.18%, avg=2399.50, stdev=116.98, samples=20 00:43:24.612 iops : min= 574, max= 640, avg=599.80, stdev=29.31, samples=20 00:43:24.612 lat (msec) : 20=1.33%, 50=98.67% 00:43:24.612 cpu : usr=98.56%, sys=0.98%, ctx=122, majf=0, minf=39 00:43:24.612 IO depths : 1=6.2%, 2=12.5%, 4=24.9%, 8=50.1%, 16=6.3%, 32=0.0%, 
>=64=0.0% 00:43:24.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.612 filename2: (groupid=0, jobs=1): err= 0: pid=640320: Sun Dec 15 05:43:36 2024 00:43:24.612 read: IOPS=599, BW=2398KiB/s (2455kB/s)(23.5MiB/10019msec) 00:43:24.612 slat (nsec): min=4984, max=76750, avg=22166.98, stdev=12360.81 00:43:24.612 clat (usec): min=12077, max=41858, avg=26513.01, stdev=1991.37 00:43:24.612 lat (usec): min=12088, max=41878, avg=26535.18, stdev=1989.50 00:43:24.612 clat percentiles (usec): 00:43:24.612 | 1.00th=[18744], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.612 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[26870], 00:43:24.612 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29230], 00:43:24.612 | 99.00th=[30802], 99.50th=[31065], 99.90th=[34341], 99.95th=[41157], 00:43:24.612 | 99.99th=[41681] 00:43:24.612 bw ( KiB/s): min= 2176, max= 2560, per=4.18%, avg=2400.32, stdev=117.50, samples=19 00:43:24.612 iops : min= 544, max= 640, avg=600.00, stdev=29.45, samples=19 00:43:24.612 lat (msec) : 20=1.17%, 50=98.83% 00:43:24.612 cpu : usr=98.37%, sys=1.21%, ctx=49, majf=0, minf=39 00:43:24.612 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:43:24.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 issued rwts: total=6006,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.612 filename2: (groupid=0, jobs=1): err= 0: pid=640321: Sun Dec 15 05:43:36 2024 00:43:24.612 read: IOPS=596, BW=2386KiB/s (2444kB/s)(23.3MiB/10003msec) 00:43:24.612 slat (usec): min=4, 
max=164, avg=40.95, stdev=19.60 00:43:24.612 clat (usec): min=13081, max=42033, avg=26447.41, stdev=1881.07 00:43:24.612 lat (usec): min=13090, max=42051, avg=26488.36, stdev=1884.87 00:43:24.612 clat percentiles (usec): 00:43:24.612 | 1.00th=[24249], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.612 | 30.00th=[25035], 40.00th=[26084], 50.00th=[26346], 60.00th=[26608], 00:43:24.612 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28705], 95.00th=[29230], 00:43:24.612 | 99.00th=[30540], 99.50th=[31065], 99.90th=[41681], 99.95th=[42206], 00:43:24.612 | 99.99th=[42206] 00:43:24.612 bw ( KiB/s): min= 2299, max= 2560, per=4.15%, avg=2384.32, stdev=97.87, samples=19 00:43:24.612 iops : min= 574, max= 640, avg=596.00, stdev=24.54, samples=19 00:43:24.612 lat (msec) : 20=0.47%, 50=99.53% 00:43:24.612 cpu : usr=98.82%, sys=0.73%, ctx=44, majf=0, minf=43 00:43:24.612 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:43:24.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 issued rwts: total=5968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.612 filename2: (groupid=0, jobs=1): err= 0: pid=640322: Sun Dec 15 05:43:36 2024 00:43:24.612 read: IOPS=599, BW=2399KiB/s (2456kB/s)(23.4MiB/10005msec) 00:43:24.612 slat (nsec): min=10830, max=98515, avg=41690.63, stdev=17145.90 00:43:24.612 clat (usec): min=10765, max=31152, avg=26339.11, stdev=1863.61 00:43:24.612 lat (usec): min=10782, max=31168, avg=26380.80, stdev=1866.08 00:43:24.612 clat percentiles (usec): 00:43:24.612 | 1.00th=[18482], 5.00th=[24511], 10.00th=[24773], 20.00th=[25035], 00:43:24.612 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26346], 60.00th=[26608], 00:43:24.612 | 70.00th=[27132], 80.00th=[27657], 90.00th=[28705], 95.00th=[29230], 00:43:24.612 | 99.00th=[30540], 99.50th=[30802], 
99.90th=[31065], 99.95th=[31065], 00:43:24.612 | 99.99th=[31065] 00:43:24.612 bw ( KiB/s): min= 2176, max= 2560, per=4.19%, avg=2405.05, stdev=117.46, samples=19 00:43:24.612 iops : min= 544, max= 640, avg=601.26, stdev=29.37, samples=19 00:43:24.612 lat (msec) : 20=1.07%, 50=98.93% 00:43:24.612 cpu : usr=98.56%, sys=0.91%, ctx=77, majf=0, minf=50 00:43:24.612 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:43:24.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 issued rwts: total=6000,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.612 filename2: (groupid=0, jobs=1): err= 0: pid=640323: Sun Dec 15 05:43:36 2024 00:43:24.612 read: IOPS=620, BW=2481KiB/s (2540kB/s)(24.2MiB/10004msec) 00:43:24.612 slat (nsec): min=6647, max=87327, avg=39297.43, stdev=18502.89 00:43:24.612 clat (usec): min=8437, max=54401, avg=25468.56, stdev=4058.44 00:43:24.612 lat (usec): min=8446, max=54443, avg=25507.86, stdev=4066.79 00:43:24.612 clat percentiles (usec): 00:43:24.612 | 1.00th=[14615], 5.00th=[16712], 10.00th=[19530], 20.00th=[24511], 00:43:24.612 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[26608], 00:43:24.612 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492], 00:43:24.612 | 99.00th=[35390], 99.50th=[42206], 99.90th=[54264], 99.95th=[54264], 00:43:24.612 | 99.99th=[54264] 00:43:24.612 bw ( KiB/s): min= 2171, max= 3248, per=4.28%, avg=2454.42, stdev=233.16, samples=19 00:43:24.612 iops : min= 542, max= 812, avg=613.53, stdev=58.39, samples=19 00:43:24.612 lat (msec) : 10=0.35%, 20=10.24%, 50=89.15%, 100=0.26% 00:43:24.612 cpu : usr=98.00%, sys=1.22%, ctx=145, majf=0, minf=59 00:43:24.612 IO depths : 1=4.6%, 2=9.5%, 4=20.8%, 8=56.9%, 16=8.2%, 32=0.0%, >=64=0.0% 00:43:24.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 complete : 0=0.0%, 4=93.0%, 8=1.5%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 issued rwts: total=6204,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.612 filename2: (groupid=0, jobs=1): err= 0: pid=640324: Sun Dec 15 05:43:36 2024 00:43:24.612 read: IOPS=599, BW=2397KiB/s (2455kB/s)(23.5MiB/10045msec) 00:43:24.612 slat (nsec): min=6662, max=87277, avg=38710.03, stdev=16275.77 00:43:24.612 clat (usec): min=8339, max=57859, avg=26360.32, stdev=3115.32 00:43:24.612 lat (usec): min=8350, max=57874, avg=26399.03, stdev=3116.02 00:43:24.612 clat percentiles (usec): 00:43:24.612 | 1.00th=[16450], 5.00th=[23987], 10.00th=[24511], 20.00th=[24773], 00:43:24.612 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26870], 00:43:24.612 | 70.00th=[27132], 80.00th=[27919], 90.00th=[28705], 95.00th=[29754], 00:43:24.612 | 99.00th=[35914], 99.50th=[41157], 99.90th=[54264], 99.95th=[54264], 00:43:24.612 | 99.99th=[57934] 00:43:24.612 bw ( KiB/s): min= 2171, max= 2608, per=4.19%, avg=2402.90, stdev=120.78, samples=20 00:43:24.612 iops : min= 542, max= 652, avg=600.65, stdev=30.30, samples=20 00:43:24.612 lat (msec) : 10=0.17%, 20=2.19%, 50=97.28%, 100=0.37% 00:43:24.612 cpu : usr=98.44%, sys=1.03%, ctx=84, majf=0, minf=31 00:43:24.612 IO depths : 1=4.9%, 2=10.5%, 4=22.6%, 8=54.1%, 16=7.9%, 32=0.0%, >=64=0.0% 00:43:24.612 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 complete : 0=0.0%, 4=93.6%, 8=1.0%, 16=5.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:24.612 issued rwts: total=6020,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:24.612 latency : target=0, window=0, percentile=100.00%, depth=16 00:43:24.612 00:43:24.612 Run status group 0 (all jobs): 00:43:24.612 READ: bw=56.0MiB/s (58.8MB/s), 2386KiB/s-2481KiB/s (2444kB/s-2540kB/s), io=563MiB (590MB), run=10003-10045msec 00:43:24.612 05:43:37 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:43:24.612 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:24.612 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:24.612 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:24.612 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:24.612 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:43:24.613 05:43:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 bdev_null0 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 [2024-12-15 05:43:37.135571] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 bdev_null1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # config=() 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@560 -- # local subsystem config 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:24.613 { 00:43:24.613 "params": { 00:43:24.613 "name": "Nvme$subsystem", 00:43:24.613 "trtype": "$TEST_TRANSPORT", 00:43:24.613 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:43:24.613 "adrfam": "ipv4", 00:43:24.613 "trsvcid": "$NVMF_PORT", 00:43:24.613 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:24.613 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:24.613 "hdgst": ${hdgst:-false}, 00:43:24.613 "ddgst": ${ddgst:-false} 00:43:24.613 }, 00:43:24.613 "method": "bdev_nvme_attach_controller" 00:43:24.613 } 00:43:24.613 EOF 00:43:24.613 )") 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:24.613 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:43:24.614 05:43:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:24.614 { 00:43:24.614 "params": { 00:43:24.614 "name": "Nvme$subsystem", 00:43:24.614 "trtype": "$TEST_TRANSPORT", 00:43:24.614 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:24.614 "adrfam": "ipv4", 00:43:24.614 "trsvcid": "$NVMF_PORT", 00:43:24.614 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:24.614 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:24.614 "hdgst": ${hdgst:-false}, 00:43:24.614 "ddgst": ${ddgst:-false} 00:43:24.614 }, 00:43:24.614 "method": "bdev_nvme_attach_controller" 00:43:24.614 } 00:43:24.614 EOF 00:43:24.614 )") 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@582 -- # cat 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@584 -- # jq . 
00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@585 -- # IFS=, 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:24.614 "params": { 00:43:24.614 "name": "Nvme0", 00:43:24.614 "trtype": "tcp", 00:43:24.614 "traddr": "10.0.0.2", 00:43:24.614 "adrfam": "ipv4", 00:43:24.614 "trsvcid": "4420", 00:43:24.614 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:24.614 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:24.614 "hdgst": false, 00:43:24.614 "ddgst": false 00:43:24.614 }, 00:43:24.614 "method": "bdev_nvme_attach_controller" 00:43:24.614 },{ 00:43:24.614 "params": { 00:43:24.614 "name": "Nvme1", 00:43:24.614 "trtype": "tcp", 00:43:24.614 "traddr": "10.0.0.2", 00:43:24.614 "adrfam": "ipv4", 00:43:24.614 "trsvcid": "4420", 00:43:24.614 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:43:24.614 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:43:24.614 "hdgst": false, 00:43:24.614 "ddgst": false 00:43:24.614 }, 00:43:24.614 "method": "bdev_nvme_attach_controller" 00:43:24.614 }' 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:24.614 05:43:37 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:24.614 05:43:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:24.614 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:24.614 ... 00:43:24.614 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:43:24.614 ... 00:43:24.614 fio-3.35 00:43:24.614 Starting 4 threads 00:43:29.882 00:43:29.882 filename0: (groupid=0, jobs=1): err= 0: pid=642218: Sun Dec 15 05:43:43 2024 00:43:29.882 read: IOPS=2664, BW=20.8MiB/s (21.8MB/s)(104MiB/5002msec) 00:43:29.882 slat (nsec): min=5954, max=77223, avg=17646.35, stdev=11552.30 00:43:29.882 clat (usec): min=583, max=5836, avg=2940.74, stdev=405.74 00:43:29.882 lat (usec): min=606, max=5847, avg=2958.39, stdev=407.81 00:43:29.882 clat percentiles (usec): 00:43:29.882 | 1.00th=[ 1762], 5.00th=[ 2245], 10.00th=[ 2442], 20.00th=[ 2671], 00:43:29.882 | 30.00th=[ 2835], 40.00th=[ 2900], 50.00th=[ 2966], 60.00th=[ 3032], 00:43:29.882 | 70.00th=[ 3130], 80.00th=[ 3195], 90.00th=[ 3326], 95.00th=[ 3458], 00:43:29.882 | 99.00th=[ 4047], 99.50th=[ 4424], 99.90th=[ 5407], 99.95th=[ 5669], 00:43:29.882 | 99.99th=[ 5800] 00:43:29.882 bw ( KiB/s): min=20071, max=23760, per=25.88%, avg=21453.22, stdev=1237.00, samples=9 00:43:29.882 iops : min= 2508, max= 2970, avg=2681.56, stdev=154.75, samples=9 00:43:29.882 lat (usec) : 750=0.02%, 1000=0.04% 00:43:29.882 lat (msec) : 2=1.53%, 4=97.31%, 10=1.10% 00:43:29.882 cpu : usr=96.60%, sys=3.02%, ctx=13, majf=0, minf=9 00:43:29.882 IO depths : 1=1.0%, 2=13.7%, 4=58.7%, 8=26.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.882 
complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.882 issued rwts: total=13327,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.882 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.882 filename0: (groupid=0, jobs=1): err= 0: pid=642219: Sun Dec 15 05:43:43 2024 00:43:29.882 read: IOPS=2558, BW=20.0MiB/s (21.0MB/s)(100.0MiB/5001msec) 00:43:29.882 slat (usec): min=6, max=251, avg=15.20, stdev= 9.95 00:43:29.882 clat (usec): min=495, max=6373, avg=3078.15, stdev=378.26 00:43:29.882 lat (usec): min=519, max=6384, avg=3093.36, stdev=378.42 00:43:29.882 clat percentiles (usec): 00:43:29.882 | 1.00th=[ 2089], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2868], 00:43:29.882 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:43:29.882 | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3425], 95.00th=[ 3654], 00:43:29.882 | 99.00th=[ 4424], 99.50th=[ 4686], 99.90th=[ 5342], 99.95th=[ 5473], 00:43:29.882 | 99.99th=[ 6390] 00:43:29.882 bw ( KiB/s): min=19888, max=21136, per=24.72%, avg=20494.22, stdev=490.19, samples=9 00:43:29.882 iops : min= 2486, max= 2642, avg=2561.78, stdev=61.27, samples=9 00:43:29.882 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.02% 00:43:29.882 lat (msec) : 2=0.68%, 4=97.02%, 10=2.25% 00:43:29.882 cpu : usr=97.00%, sys=2.66%, ctx=9, majf=0, minf=9 00:43:29.882 IO depths : 1=1.6%, 2=7.6%, 4=64.9%, 8=25.9%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.882 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.882 complete : 0=0.0%, 4=91.9%, 8=8.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.882 issued rwts: total=12795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.882 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.882 filename1: (groupid=0, jobs=1): err= 0: pid=642220: Sun Dec 15 05:43:43 2024 00:43:29.882 read: IOPS=2534, BW=19.8MiB/s (20.8MB/s)(99.0MiB/5002msec) 00:43:29.882 slat (nsec): min=6243, max=70475, avg=16576.13, stdev=10624.09 00:43:29.882 clat 
(usec): min=610, max=6069, avg=3101.29, stdev=419.92 00:43:29.882 lat (usec): min=647, max=6077, avg=3117.86, stdev=419.59 00:43:29.882 clat percentiles (usec): 00:43:29.883 | 1.00th=[ 2114], 5.00th=[ 2540], 10.00th=[ 2737], 20.00th=[ 2900], 00:43:29.883 | 30.00th=[ 2933], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:43:29.883 | 70.00th=[ 3195], 80.00th=[ 3261], 90.00th=[ 3490], 95.00th=[ 3785], 00:43:29.883 | 99.00th=[ 4817], 99.50th=[ 5080], 99.90th=[ 5473], 99.95th=[ 5604], 00:43:29.883 | 99.99th=[ 5735] 00:43:29.883 bw ( KiB/s): min=19433, max=21280, per=24.50%, avg=20312.11, stdev=602.50, samples=9 00:43:29.883 iops : min= 2429, max= 2660, avg=2539.00, stdev=75.34, samples=9 00:43:29.883 lat (usec) : 750=0.02%, 1000=0.02% 00:43:29.883 lat (msec) : 2=0.76%, 4=95.76%, 10=3.44% 00:43:29.883 cpu : usr=97.18%, sys=2.48%, ctx=12, majf=0, minf=9 00:43:29.883 IO depths : 1=0.6%, 2=11.2%, 4=61.7%, 8=26.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.883 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.883 issued rwts: total=12678,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.883 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.883 filename1: (groupid=0, jobs=1): err= 0: pid=642221: Sun Dec 15 05:43:43 2024 00:43:29.883 read: IOPS=2604, BW=20.3MiB/s (21.3MB/s)(102MiB/5002msec) 00:43:29.883 slat (nsec): min=6318, max=70510, avg=15970.17, stdev=10380.25 00:43:29.883 clat (usec): min=596, max=5727, avg=3014.01, stdev=386.73 00:43:29.883 lat (usec): min=605, max=5741, avg=3029.98, stdev=387.11 00:43:29.883 clat percentiles (usec): 00:43:29.883 | 1.00th=[ 1909], 5.00th=[ 2376], 10.00th=[ 2606], 20.00th=[ 2802], 00:43:29.883 | 30.00th=[ 2900], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:43:29.883 | 70.00th=[ 3163], 80.00th=[ 3228], 90.00th=[ 3359], 95.00th=[ 3556], 00:43:29.883 | 99.00th=[ 4178], 99.50th=[ 4555], 99.90th=[ 5211], 
99.95th=[ 5538], 00:43:29.883 | 99.99th=[ 5669] 00:43:29.883 bw ( KiB/s): min=19728, max=22768, per=25.15%, avg=20843.56, stdev=924.02, samples=9 00:43:29.883 iops : min= 2466, max= 2846, avg=2605.44, stdev=115.50, samples=9 00:43:29.883 lat (usec) : 750=0.03%, 1000=0.05% 00:43:29.883 lat (msec) : 2=1.20%, 4=97.25%, 10=1.47% 00:43:29.883 cpu : usr=96.86%, sys=2.72%, ctx=27, majf=0, minf=10 00:43:29.883 IO depths : 1=2.1%, 2=15.3%, 4=58.6%, 8=24.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:29.883 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.883 complete : 0=0.0%, 4=90.7%, 8=9.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:29.883 issued rwts: total=13027,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:29.883 latency : target=0, window=0, percentile=100.00%, depth=8 00:43:29.883 00:43:29.883 Run status group 0 (all jobs): 00:43:29.883 READ: bw=80.9MiB/s (84.9MB/s), 19.8MiB/s-20.8MiB/s (20.8MB/s-21.8MB/s), io=405MiB (425MB), run=5001-5002msec 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:29.883 
05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:29.883 00:43:29.883 real 0m24.481s 00:43:29.883 user 4m52.428s 00:43:29.883 sys 0m4.594s 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:29.883 05:43:43 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:43:29.883 ************************************ 00:43:29.883 END TEST fio_dif_rand_params 00:43:29.883 ************************************ 00:43:30.186 05:43:43 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:43:30.186 05:43:43 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:30.186 05:43:43 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:30.186 05:43:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:30.186 ************************************ 00:43:30.186 START TEST fio_dif_digest 00:43:30.186 ************************************ 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:30.186 bdev_null0 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:30.186 [2024-12-15 05:43:43.660893] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@560 -- # config=() 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@560 -- # local subsystem config 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@562 -- # for subsystem in "${@:-1}" 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # config+=("$(cat <<-EOF 00:43:30.186 { 00:43:30.186 "params": { 00:43:30.186 "name": "Nvme$subsystem", 00:43:30.186 "trtype": "$TEST_TRANSPORT", 00:43:30.186 "traddr": "$NVMF_FIRST_TARGET_IP", 00:43:30.186 "adrfam": "ipv4", 00:43:30.186 "trsvcid": "$NVMF_PORT", 00:43:30.186 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:43:30.186 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:43:30.186 "hdgst": ${hdgst:-false}, 00:43:30.186 "ddgst": ${ddgst:-false} 00:43:30.186 }, 00:43:30.186 "method": "bdev_nvme_attach_controller" 00:43:30.186 } 00:43:30.186 EOF 00:43:30.186 )") 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@582 -- # cat 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@584 -- # jq . 
00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@585 -- # IFS=, 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- nvmf/common.sh@586 -- # printf '%s\n' '{ 00:43:30.186 "params": { 00:43:30.186 "name": "Nvme0", 00:43:30.186 "trtype": "tcp", 00:43:30.186 "traddr": "10.0.0.2", 00:43:30.186 "adrfam": "ipv4", 00:43:30.186 "trsvcid": "4420", 00:43:30.186 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:43:30.186 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:43:30.186 "hdgst": true, 00:43:30.186 "ddgst": true 00:43:30.186 }, 00:43:30.186 "method": "bdev_nvme_attach_controller" 00:43:30.186 }' 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:30.186 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:43:30.187 05:43:43 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:43:30.498 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:43:30.498 ... 
00:43:30.498 fio-3.35 00:43:30.498 Starting 3 threads 00:43:42.749 00:43:42.749 filename0: (groupid=0, jobs=1): err= 0: pid=643419: Sun Dec 15 05:43:54 2024 00:43:42.749 read: IOPS=288, BW=36.1MiB/s (37.9MB/s)(363MiB/10046msec) 00:43:42.749 slat (nsec): min=6469, max=35450, avg=11477.55, stdev=1906.68 00:43:42.749 clat (usec): min=6789, max=51935, avg=10356.45, stdev=1835.46 00:43:42.749 lat (usec): min=6800, max=51970, avg=10367.93, stdev=1835.59 00:43:42.749 clat percentiles (usec): 00:43:42.749 | 1.00th=[ 8356], 5.00th=[ 8979], 10.00th=[ 9372], 20.00th=[ 9765], 00:43:42.749 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10552], 00:43:42.749 | 70.00th=[10683], 80.00th=[10945], 90.00th=[11207], 95.00th=[11469], 00:43:42.749 | 99.00th=[12256], 99.50th=[12649], 99.90th=[52167], 99.95th=[52167], 00:43:42.749 | 99.99th=[52167] 00:43:42.749 bw ( KiB/s): min=33024, max=37888, per=35.26%, avg=37120.00, stdev=1040.71, samples=20 00:43:42.749 iops : min= 258, max= 296, avg=290.00, stdev= 8.13, samples=20 00:43:42.749 lat (msec) : 10=33.18%, 20=66.64%, 50=0.03%, 100=0.14% 00:43:42.749 cpu : usr=94.43%, sys=5.29%, ctx=19, majf=0, minf=64 00:43:42.749 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:42.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.749 issued rwts: total=2902,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:42.749 filename0: (groupid=0, jobs=1): err= 0: pid=643420: Sun Dec 15 05:43:54 2024 00:43:42.749 read: IOPS=263, BW=32.9MiB/s (34.5MB/s)(330MiB/10044msec) 00:43:42.749 slat (nsec): min=6548, max=39031, avg=11566.40, stdev=1836.51 00:43:42.749 clat (usec): min=6583, max=48820, avg=11375.28, stdev=1318.84 00:43:42.749 lat (usec): min=6594, max=48832, avg=11386.84, stdev=1318.80 00:43:42.749 clat percentiles (usec): 00:43:42.749 
| 1.00th=[ 8356], 5.00th=[10028], 10.00th=[10421], 20.00th=[10683], 00:43:42.749 | 30.00th=[10945], 40.00th=[11207], 50.00th=[11338], 60.00th=[11600], 00:43:42.749 | 70.00th=[11731], 80.00th=[11994], 90.00th=[12387], 95.00th=[12649], 00:43:42.749 | 99.00th=[13304], 99.50th=[13435], 99.90th=[14615], 99.95th=[46924], 00:43:42.749 | 99.99th=[49021] 00:43:42.749 bw ( KiB/s): min=33280, max=35072, per=32.10%, avg=33792.00, stdev=447.28, samples=20 00:43:42.749 iops : min= 260, max= 274, avg=264.00, stdev= 3.49, samples=20 00:43:42.749 lat (msec) : 10=4.96%, 20=94.97%, 50=0.08% 00:43:42.749 cpu : usr=94.65%, sys=5.05%, ctx=18, majf=0, minf=105 00:43:42.749 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:42.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.749 issued rwts: total=2642,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:42.749 filename0: (groupid=0, jobs=1): err= 0: pid=643421: Sun Dec 15 05:43:54 2024 00:43:42.749 read: IOPS=270, BW=33.8MiB/s (35.5MB/s)(340MiB/10045msec) 00:43:42.749 slat (nsec): min=6469, max=31910, avg=11599.31, stdev=1817.11 00:43:42.749 clat (usec): min=6578, max=51986, avg=11054.85, stdev=1828.81 00:43:42.749 lat (usec): min=6591, max=51995, avg=11066.45, stdev=1828.72 00:43:42.749 clat percentiles (usec): 00:43:42.749 | 1.00th=[ 9241], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10421], 00:43:42.749 | 30.00th=[10552], 40.00th=[10814], 50.00th=[10945], 60.00th=[11207], 00:43:42.749 | 70.00th=[11338], 80.00th=[11600], 90.00th=[11994], 95.00th=[12256], 00:43:42.749 | 99.00th=[12911], 99.50th=[13304], 99.90th=[50594], 99.95th=[51643], 00:43:42.749 | 99.99th=[52167] 00:43:42.749 bw ( KiB/s): min=32512, max=36096, per=33.03%, avg=34777.60, stdev=739.51, samples=20 00:43:42.749 iops : min= 254, max= 282, avg=271.70, stdev= 5.78, 
samples=20 00:43:42.749 lat (msec) : 10=8.86%, 20=90.95%, 50=0.07%, 100=0.11% 00:43:42.749 cpu : usr=94.91%, sys=4.80%, ctx=18, majf=0, minf=81 00:43:42.749 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:43:42.749 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.749 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:43:42.749 issued rwts: total=2719,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:43:42.749 latency : target=0, window=0, percentile=100.00%, depth=3 00:43:42.749 00:43:42.749 Run status group 0 (all jobs): 00:43:42.749 READ: bw=103MiB/s (108MB/s), 32.9MiB/s-36.1MiB/s (34.5MB/s-37.9MB/s), io=1033MiB (1083MB), run=10044-10046msec 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:43:42.749 00:43:42.749 
real 0m11.183s 00:43:42.749 user 0m35.548s 00:43:42.749 sys 0m1.877s 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:42.749 05:43:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:43:42.750 ************************************ 00:43:42.750 END TEST fio_dif_digest 00:43:42.750 ************************************ 00:43:42.750 05:43:54 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:43:42.750 05:43:54 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@516 -- # nvmfcleanup 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@121 -- # sync 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@124 -- # set +e 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@125 -- # for i in {1..20} 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:43:42.750 rmmod nvme_tcp 00:43:42.750 rmmod nvme_fabrics 00:43:42.750 rmmod nvme_keyring 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@127 -- # modprobe -v -r nvme-fabrics 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@128 -- # set -e 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@129 -- # return 0 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@517 -- # '[' -n 635079 ']' 00:43:42.750 05:43:54 nvmf_dif -- nvmf/common.sh@518 -- # killprocess 635079 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 635079 ']' 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 635079 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 635079 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:43:42.750 05:43:54 
nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 635079' 00:43:42.750 killing process with pid 635079 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@973 -- # kill 635079 00:43:42.750 05:43:54 nvmf_dif -- common/autotest_common.sh@978 -- # wait 635079 00:43:42.750 05:43:55 nvmf_dif -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:43:42.750 05:43:55 nvmf_dif -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:43:44.128 Waiting for block devices as requested 00:43:44.128 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:43:44.388 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:44.388 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:44.646 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:44.646 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:44.646 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:44.646 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:44.905 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:44.905 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:44.905 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:43:45.164 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:43:45.164 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:43:45.164 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:43:45.164 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:43:45.424 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:43:45.424 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:43:45.424 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:43:45.424 05:43:59 nvmf_dif -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:43:45.424 05:43:59 nvmf_dif -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:43:45.424 05:43:59 nvmf_dif -- nvmf/common.sh@297 -- # iptr 00:43:45.424 05:43:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-save 00:43:45.424 05:43:59 nvmf_dif -- nvmf/common.sh@791 -- # grep -v 
SPDK_NVMF 00:43:45.424 05:43:59 nvmf_dif -- nvmf/common.sh@791 -- # iptables-restore 00:43:45.683 05:43:59 nvmf_dif -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:43:45.683 05:43:59 nvmf_dif -- nvmf/common.sh@302 -- # remove_spdk_ns 00:43:45.683 05:43:59 nvmf_dif -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:45.683 05:43:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:45.683 05:43:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:47.589 05:44:01 nvmf_dif -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:43:47.589 00:43:47.589 real 1m14.312s 00:43:47.589 user 7m10.592s 00:43:47.589 sys 0m20.469s 00:43:47.589 05:44:01 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:43:47.589 05:44:01 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:43:47.589 ************************************ 00:43:47.589 END TEST nvmf_dif 00:43:47.589 ************************************ 00:43:47.589 05:44:01 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:47.589 05:44:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:47.589 05:44:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:47.589 05:44:01 -- common/autotest_common.sh@10 -- # set +x 00:43:47.589 ************************************ 00:43:47.589 START TEST nvmf_abort_qd_sizes 00:43:47.589 ************************************ 00:43:47.589 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:43:47.849 * Looking for test storage... 
00:43:47.849 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lcov --version 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:43:47.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.849 --rc genhtml_branch_coverage=1 00:43:47.849 --rc genhtml_function_coverage=1 00:43:47.849 --rc genhtml_legend=1 00:43:47.849 --rc geninfo_all_blocks=1 00:43:47.849 --rc geninfo_unexecuted_blocks=1 00:43:47.849 00:43:47.849 ' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:43:47.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.849 --rc genhtml_branch_coverage=1 00:43:47.849 --rc genhtml_function_coverage=1 00:43:47.849 --rc genhtml_legend=1 00:43:47.849 --rc 
geninfo_all_blocks=1 00:43:47.849 --rc geninfo_unexecuted_blocks=1 00:43:47.849 00:43:47.849 ' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:43:47.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.849 --rc genhtml_branch_coverage=1 00:43:47.849 --rc genhtml_function_coverage=1 00:43:47.849 --rc genhtml_legend=1 00:43:47.849 --rc geninfo_all_blocks=1 00:43:47.849 --rc geninfo_unexecuted_blocks=1 00:43:47.849 00:43:47.849 ' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:43:47.849 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:43:47.849 --rc genhtml_branch_coverage=1 00:43:47.849 --rc genhtml_function_coverage=1 00:43:47.849 --rc genhtml_legend=1 00:43:47.849 --rc geninfo_all_blocks=1 00:43:47.849 --rc geninfo_unexecuted_blocks=1 00:43:47.849 00:43:47.849 ' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:43:47.849 05:44:01 nvmf_abort_qd_sizes 
-- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:43:47.849 05:44:01 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # : 0 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:43:47.850 05:44:01 
nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:43:47.850 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@55 -- # have_pci_nics=0 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@469 -- # '[' -z tcp ']' 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@476 -- # prepare_net_devs 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@438 -- # local -g is_hw=no 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@440 -- # remove_spdk_ns 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # [[ phy != virt ]] 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # gather_supported_nvmf_pci_devs 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # xtrace_disable 00:43:47.850 05:44:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@313 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # pci_devs=() 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # local -a pci_devs 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- 
nvmf/common.sh@316 -- # pci_net_devs=() 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@316 -- # local -a pci_net_devs 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # pci_drivers=() 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # local -A pci_drivers 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # net_devs=() 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@319 -- # local -ga net_devs 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # e810=() 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # local -ga e810 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # x722=() 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # local -ga x722 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # mlx=() 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@322 -- # local -ga mlx 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@325 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@332 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@334 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@338 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:43:54.420 05:44:06 
nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@343 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@344 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # pci_devs+=("${e810[@]}") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@347 -- # [[ tcp == rdma ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@353 -- # [[ e810 == mlx5 ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@355 -- # [[ e810 == e810 ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@356 -- # pci_devs=("${e810[@]}") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@361 -- # (( 2 == 0 )) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.0 (0x8086 - 0x159b)' 00:43:54.420 Found 0000:af:00.0 (0x8086 - 0x159b) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice == unbound ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # for pci in "${pci_devs[@]}" 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@367 -- # echo 'Found 0000:af:00.1 (0x8086 - 0x159b)' 00:43:54.420 Found 0000:af:00.1 (0x8086 - 0x159b) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@368 -- # [[ ice == unknown ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ ice 
== unbound ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@376 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@377 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@378 -- # [[ tcp == rdma ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@392 -- # (( 0 > 0 )) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ e810 == e810 ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@398 -- # [[ tcp == rdma ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up == up ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.0: cvl_0_0' 00:43:54.420 Found net devices under 0000:af:00.0: cvl_0_0 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # for pci in "${pci_devs[@]}" 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@411 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ tcp == tcp ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # for net_dev in "${!pci_net_devs[@]}" 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # [[ up 
== up ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # (( 1 == 0 )) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@427 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@428 -- # echo 'Found net devices under 0000:af:00.1: cvl_0_1' 00:43:54.420 Found net devices under 0000:af:00.1: cvl_0_1 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@429 -- # net_devs+=("${pci_net_devs[@]}") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@432 -- # (( 2 == 0 )) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@442 -- # is_hw=yes 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@444 -- # [[ yes == yes ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@445 -- # [[ tcp == tcp ]] 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # nvmf_tcp_init 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@250 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # NVMF_INITIATOR_IP=10.0.0.1 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@253 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@256 -- # (( 2 > 1 )) 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@259 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # NVMF_SECOND_TARGET_IP= 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@263 -- # NVMF_SECOND_INITIATOR_IP= 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ip -4 addr flush cvl_0_0 00:43:54.420 05:44:06 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip -4 addr flush cvl_0_1 00:43:54.420 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@271 -- # ip netns add cvl_0_0_ns_spdk 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@277 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@281 -- # ip link set cvl_0_1 up 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@283 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@284 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@287 -- # ipts -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@790 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT' 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@290 -- # ping -c 1 10.0.0.2 00:43:54.421 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:43:54.421 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.399 ms 00:43:54.421 00:43:54.421 --- 10.0.0.2 ping statistics --- 00:43:54.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:54.421 rtt min/avg/max/mdev = 0.399/0.399/0.399/0.000 ms 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:43:54.421 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:43:54.421 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.200 ms 00:43:54.421 00:43:54.421 --- 10.0.0.1 ping statistics --- 00:43:54.421 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:43:54.421 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # return 0 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@478 -- # '[' iso == iso ']' 00:43:54.421 05:44:07 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:43:56.955 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:43:56.955 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:43:57.522 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # [[ tcp == \r\d\m\a ]] 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # [[ tcp == \t\c\p ]] 00:43:57.522 05:44:11 
nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # '[' tcp == tcp ']' 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@502 -- # modprobe nvme-tcp 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@507 -- # timing_enter start_nvmf_tgt 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@509 -- # nvmfpid=651120 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@508 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@510 -- # waitforlisten 651120 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 651120 ']' 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:43:57.522 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:43:57.522 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.779 [2024-12-15 05:44:11.224374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:43:57.779 [2024-12-15 05:44:11.224418] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:43:57.779 [2024-12-15 05:44:11.302663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:43:57.779 [2024-12-15 05:44:11.326630] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:43:57.779 [2024-12-15 05:44:11.326667] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:43:57.779 [2024-12-15 05:44:11.326674] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:43:57.779 [2024-12-15 05:44:11.326680] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:43:57.779 [2024-12-15 05:44:11.326687] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:43:57.779 [2024-12-15 05:44:11.327983] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:43:57.779 [2024-12-15 05:44:11.328097] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:43:57.779 [2024-12-15 05:44:11.328134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:43:57.779 [2024-12-15 05:44:11.328135] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@511 -- # timing_exit start_nvmf_tgt 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- nvmf/common.sh@512 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:43:57.779 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:43:58.037 05:44:11 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:43:58.037 05:44:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:43:58.037 05:44:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:43:58.037 05:44:11 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:43:58.037 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:43:58.037 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:43:58.037 05:44:11 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:43:58.037 ************************************ 00:43:58.037 START TEST spdk_target_abort 00:43:58.037 ************************************ 00:43:58.037 05:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:43:58.037 05:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:43:58.037 05:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:43:58.037 05:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:43:58.037 05:44:11 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.326 spdk_targetn1 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.326 [2024-12-15 05:44:14.332119] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:01.326 [2024-12-15 05:44:14.384401] tcp.c:1099:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:01.326 05:44:14 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:03.870 Initializing NVMe Controllers 00:44:03.870 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:03.870 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:03.870 Initialization complete. Launching workers. 
00:44:03.870 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 15775, failed: 0 00:44:03.870 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1379, failed to submit 14396 00:44:03.870 success 701, unsuccessful 678, failed 0 00:44:03.870 05:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:03.870 05:44:17 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:07.158 Initializing NVMe Controllers 00:44:07.158 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:07.158 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:07.158 Initialization complete. Launching workers. 00:44:07.158 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8638, failed: 0 00:44:07.158 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1253, failed to submit 7385 00:44:07.158 success 329, unsuccessful 924, failed 0 00:44:07.158 05:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:07.158 05:44:20 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:10.446 Initializing NVMe Controllers 00:44:10.446 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:44:10.446 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:10.446 Initialization complete. Launching workers. 
00:44:10.446 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 38603, failed: 0 00:44:10.446 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2838, failed to submit 35765 00:44:10.446 success 579, unsuccessful 2259, failed 0 00:44:10.446 05:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:44:10.446 05:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.446 05:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:10.446 05:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:10.447 05:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:44:10.447 05:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:10.447 05:44:23 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 651120 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 651120 ']' 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 651120 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 651120 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 651120' 00:44:11.823 killing process with pid 651120 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 651120 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 651120 00:44:11.823 00:44:11.823 real 0m13.928s 00:44:11.823 user 0m53.321s 00:44:11.823 sys 0m2.284s 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:11.823 ************************************ 00:44:11.823 END TEST spdk_target_abort 00:44:11.823 ************************************ 00:44:11.823 05:44:25 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:44:11.823 05:44:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:11.823 05:44:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:11.823 05:44:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:11.823 ************************************ 00:44:11.823 START TEST kernel_target_abort 00:44:11.823 ************************************ 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@769 -- # local ip 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # 
ip_candidates=() 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@770 -- # local -A ip_candidates 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@772 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:44:11.823 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@773 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z tcp ]] 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@775 -- # [[ -z NVMF_INITIATOR_IP ]] 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@776 -- # ip=NVMF_INITIATOR_IP 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@778 -- # [[ -z 10.0.0.1 ]] 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@783 -- # echo 10.0.0.1 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@662 -- # nvmet=/sys/kernel/config/nvmet 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@663 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@664 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # local block nvme 00:44:12.082 05:44:25 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # [[ ! -e /sys/module/nvmet ]] 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@670 -- # modprobe nvmet 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # [[ -e /sys/kernel/config/nvmet ]] 00:44:12.082 05:44:25 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@675 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:14.618 Waiting for block devices as requested 00:44:14.619 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:14.878 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:14.878 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:14.878 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:15.137 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:15.137 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:15.137 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:15.137 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:15.397 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:15.397 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:15.397 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:15.656 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:15.656 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:15.656 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:15.915 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:15.915 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:15.915 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:15.915 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@678 -- # for block in /sys/block/nvme* 00:44:15.915 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@679 -- # [[ -e /sys/block/nvme0n1 ]] 00:44:15.915 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # is_block_zoned nvme0n1 00:44:16.174 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local 
device=nvme0n1 00:44:16.174 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:44:16.174 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # block_in_use nvme0n1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:44:16.175 No valid GPT data, bailing 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@681 -- # nvme=/dev/nvme0n1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -b /dev/nvme0n1 ]] 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@687 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # echo 1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@696 -- # echo /dev/nvme0n1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@697 -- # echo 1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@699 -- # echo 10.0.0.1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@700 -- # echo tcp 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@701 -- # echo 4420 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@702 -- # echo ipv4 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@705 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@708 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 --hostid=80b56b8f-cbc7-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:44:16.175 00:44:16.175 Discovery Log Number of Records 2, Generation counter 2 00:44:16.175 =====Discovery Log Entry 0====== 00:44:16.175 trtype: tcp 00:44:16.175 adrfam: ipv4 00:44:16.175 subtype: current discovery subsystem 00:44:16.175 treq: not specified, sq flow control disable supported 00:44:16.175 portid: 1 00:44:16.175 trsvcid: 4420 00:44:16.175 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:44:16.175 traddr: 10.0.0.1 00:44:16.175 eflags: none 00:44:16.175 sectype: none 00:44:16.175 =====Discovery Log Entry 1====== 00:44:16.175 trtype: tcp 00:44:16.175 adrfam: ipv4 00:44:16.175 subtype: nvme subsystem 00:44:16.175 treq: not specified, sq flow control disable supported 00:44:16.175 portid: 1 00:44:16.175 trsvcid: 4420 00:44:16.175 subnqn: nqn.2016-06.io.spdk:testnqn 00:44:16.175 traddr: 10.0.0.1 00:44:16.175 eflags: none 00:44:16.175 sectype: none 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:16.175 05:44:29 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:19.462 Initializing NVMe Controllers 00:44:19.462 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:19.462 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:19.462 Initialization complete. Launching workers. 
00:44:19.462 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 94112, failed: 0 00:44:19.462 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 94112, failed to submit 0 00:44:19.462 success 0, unsuccessful 94112, failed 0 00:44:19.462 05:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:19.462 05:44:32 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:22.751 Initializing NVMe Controllers 00:44:22.751 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:22.751 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:22.751 Initialization complete. Launching workers. 00:44:22.751 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 150341, failed: 0 00:44:22.751 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37710, failed to submit 112631 00:44:22.751 success 0, unsuccessful 37710, failed 0 00:44:22.751 05:44:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:44:22.751 05:44:35 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:44:26.039 Initializing NVMe Controllers 00:44:26.039 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:44:26.039 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:44:26.039 Initialization complete. Launching workers. 
00:44:26.039 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141755, failed: 0 00:44:26.039 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35490, failed to submit 106265 00:44:26.039 success 0, unsuccessful 35490, failed 0 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@712 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@714 -- # echo 0 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@716 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@717 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@718 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@719 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@721 -- # modules=(/sys/module/nvmet/holders/*) 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@723 -- # modprobe -r nvmet_tcp nvmet 00:44:26.039 05:44:39 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@726 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:44:28.574 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 
0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:44:28.574 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:44:29.141 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:44:29.400 00:44:29.400 real 0m17.420s 00:44:29.400 user 0m9.114s 00:44:29.400 sys 0m5.027s 00:44:29.400 05:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:29.400 05:44:42 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:44:29.400 ************************************ 00:44:29.400 END TEST kernel_target_abort 00:44:29.400 ************************************ 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@516 -- # nvmfcleanup 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # sync 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # '[' tcp == tcp ']' 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set +e 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # for i in {1..20} 00:44:29.400 05:44:42 nvmf_abort_qd_sizes -- nvmf/common.sh@126 -- # modprobe -v -r nvme-tcp 00:44:29.400 rmmod nvme_tcp 00:44:29.400 rmmod nvme_fabrics 00:44:29.400 rmmod nvme_keyring 00:44:29.400 05:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@127 
-- # modprobe -v -r nvme-fabrics 00:44:29.400 05:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@128 -- # set -e 00:44:29.400 05:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # return 0 00:44:29.401 05:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@517 -- # '[' -n 651120 ']' 00:44:29.401 05:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@518 -- # killprocess 651120 00:44:29.401 05:44:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 651120 ']' 00:44:29.401 05:44:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 651120 00:44:29.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (651120) - No such process 00:44:29.401 05:44:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 651120 is not found' 00:44:29.401 Process with pid 651120 is not found 00:44:29.401 05:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@520 -- # '[' iso == iso ']' 00:44:29.401 05:44:43 nvmf_abort_qd_sizes -- nvmf/common.sh@521 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:44:32.688 Waiting for block devices as requested 00:44:32.688 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:44:32.688 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:32.688 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:32.688 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:32.688 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:32.688 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:32.688 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:44:32.688 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:32.948 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:32.948 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:44:32.948 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:44:32.948 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:44:33.207 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:44:33.207 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:44:33.207 0000:80:04.2 
(8086 2021): vfio-pci -> ioatdma 00:44:33.207 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:44:33.466 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@523 -- # [[ tcp == \t\c\p ]] 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@524 -- # nvmf_tcp_fini 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # iptr 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-save 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # grep -v SPDK_NVMF 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@791 -- # iptables-restore 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s\_\s\p\d\k ]] 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # remove_spdk_ns 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- nvmf/common.sh@656 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:44:33.466 05:44:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:44:36.001 05:44:49 nvmf_abort_qd_sizes -- nvmf/common.sh@303 -- # ip -4 addr flush cvl_0_1 00:44:36.001 00:44:36.001 real 0m47.894s 00:44:36.001 user 1m6.704s 00:44:36.001 sys 0m16.005s 00:44:36.001 05:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:36.001 05:44:49 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:44:36.001 ************************************ 00:44:36.001 END TEST nvmf_abort_qd_sizes 00:44:36.001 ************************************ 00:44:36.001 05:44:49 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:36.001 05:44:49 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:44:36.001 05:44:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 
00:44:36.001 05:44:49 -- common/autotest_common.sh@10 -- # set +x 00:44:36.001 ************************************ 00:44:36.001 START TEST keyring_file 00:44:36.001 ************************************ 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:44:36.001 * Looking for test storage... 00:44:36.001 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1711 -- # lcov --version 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@345 -- # : 1 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:36.001 05:44:49 keyring_file -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@353 -- # local d=1 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@355 -- # echo 1 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@353 -- # local d=2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@355 -- # echo 2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:36.001 05:44:49 keyring_file -- scripts/common.sh@368 -- # return 0 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:36.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.001 --rc genhtml_branch_coverage=1 00:44:36.001 --rc genhtml_function_coverage=1 00:44:36.001 --rc genhtml_legend=1 00:44:36.001 --rc geninfo_all_blocks=1 00:44:36.001 --rc geninfo_unexecuted_blocks=1 00:44:36.001 00:44:36.001 ' 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:36.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.001 --rc genhtml_branch_coverage=1 00:44:36.001 --rc genhtml_function_coverage=1 00:44:36.001 --rc genhtml_legend=1 00:44:36.001 --rc geninfo_all_blocks=1 00:44:36.001 --rc 
geninfo_unexecuted_blocks=1 00:44:36.001 00:44:36.001 ' 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:36.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.001 --rc genhtml_branch_coverage=1 00:44:36.001 --rc genhtml_function_coverage=1 00:44:36.001 --rc genhtml_legend=1 00:44:36.001 --rc geninfo_all_blocks=1 00:44:36.001 --rc geninfo_unexecuted_blocks=1 00:44:36.001 00:44:36.001 ' 00:44:36.001 05:44:49 keyring_file -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:36.001 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:36.001 --rc genhtml_branch_coverage=1 00:44:36.001 --rc genhtml_function_coverage=1 00:44:36.001 --rc genhtml_legend=1 00:44:36.001 --rc geninfo_all_blocks=1 00:44:36.001 --rc geninfo_unexecuted_blocks=1 00:44:36.001 00:44:36.001 ' 00:44:36.001 05:44:49 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:36.001 05:44:49 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:36.001 05:44:49 keyring_file -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:44:36.001 05:44:49 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:36.002 05:44:49 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:44:36.002 05:44:49 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:36.002 05:44:49 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:36.002 05:44:49 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:36.002 05:44:49 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.002 05:44:49 keyring_file -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.002 05:44:49 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.002 05:44:49 keyring_file -- paths/export.sh@5 -- # export PATH 00:44:36.002 05:44:49 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@51 -- # : 0 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:36.002 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.XOeuUSKTeq 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@732 
-- # key=00112233445566778899aabbccddeeff 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.XOeuUSKTeq 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.XOeuUSKTeq 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.XOeuUSKTeq 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@17 -- # name=key1 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.M5ZbnPWA2U 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@730 -- # local prefix key digest 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:36.002 05:44:49 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.M5ZbnPWA2U 00:44:36.002 05:44:49 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.M5ZbnPWA2U 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.M5ZbnPWA2U 
00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@30 -- # tgtpid=659691 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:36.002 05:44:49 keyring_file -- keyring/file.sh@32 -- # waitforlisten 659691 00:44:36.002 05:44:49 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659691 ']' 00:44:36.002 05:44:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:36.002 05:44:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:36.002 05:44:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:36.002 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:36.002 05:44:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:36.002 05:44:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:36.002 [2024-12-15 05:44:49.582889] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:44:36.002 [2024-12-15 05:44:49.582939] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659691 ] 00:44:36.002 [2024-12-15 05:44:49.653562] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:36.002 [2024-12-15 05:44:49.675960] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:36.261 05:44:49 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:36.261 05:44:49 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:36.261 05:44:49 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:44:36.261 05:44:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.261 05:44:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:36.261 [2024-12-15 05:44:49.884762] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:36.261 null0 00:44:36.261 [2024-12-15 05:44:49.916824] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:36.261 [2024-12-15 05:44:49.917100] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:36.261 05:44:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:36.261 05:44:49 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:36.261 05:44:49 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:36.262 05:44:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:36.262 [2024-12-15 05:44:49.944877] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:44:36.520 request: 00:44:36.521 { 00:44:36.521 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:44:36.521 "secure_channel": false, 00:44:36.521 "listen_address": { 00:44:36.521 "trtype": "tcp", 00:44:36.521 "traddr": "127.0.0.1", 00:44:36.521 "trsvcid": "4420" 00:44:36.521 }, 00:44:36.521 "method": "nvmf_subsystem_add_listener", 00:44:36.521 "req_id": 1 00:44:36.521 } 00:44:36.521 Got JSON-RPC error response 00:44:36.521 response: 00:44:36.521 { 00:44:36.521 "code": -32602, 00:44:36.521 "message": "Invalid parameters" 00:44:36.521 } 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:36.521 05:44:49 keyring_file -- keyring/file.sh@47 -- # bperfpid=659696 00:44:36.521 05:44:49 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:44:36.521 05:44:49 keyring_file -- keyring/file.sh@49 -- # waitforlisten 659696 /var/tmp/bperf.sock 00:44:36.521 05:44:49 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 659696 ']' 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:36.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:36.521 05:44:49 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:36.521 [2024-12-15 05:44:49.998413] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 00:44:36.521 [2024-12-15 05:44:49.998454] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid659696 ] 00:44:36.521 [2024-12-15 05:44:50.088095] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:36.521 [2024-12-15 05:44:50.110514] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:36.521 05:44:50 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:36.521 05:44:50 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:36.521 05:44:50 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:36.780 05:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:36.780 05:44:50 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M5ZbnPWA2U 00:44:36.780 05:44:50 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M5ZbnPWA2U 00:44:37.038 05:44:50 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:44:37.038 05:44:50 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:44:37.038 05:44:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.038 05:44:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:37.038 05:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.297 05:44:50 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.XOeuUSKTeq == \/\t\m\p\/\t\m\p\.\X\O\e\u\U\S\K\T\e\q ]] 00:44:37.297 05:44:50 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:44:37.297 05:44:50 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:44:37.297 05:44:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.297 05:44:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:37.297 05:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.555 05:44:50 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.M5ZbnPWA2U == \/\t\m\p\/\t\m\p\.\M\5\Z\b\n\P\W\A\2\U ]] 00:44:37.555 05:44:50 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:44:37.555 05:44:50 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:37.555 05:44:50 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.555 05:44:50 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.555 05:44:50 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:37.555 05:44:50 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:44:37.555 05:44:51 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:44:37.555 05:44:51 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:44:37.556 05:44:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:37.556 05:44:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:37.556 05:44:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:37.556 05:44:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:37.556 05:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:37.814 05:44:51 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:44:37.814 05:44:51 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:37.814 05:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:38.073 [2024-12-15 05:44:51.548905] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:38.073 nvme0n1 00:44:38.073 05:44:51 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:44:38.073 05:44:51 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:38.073 05:44:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:38.073 05:44:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:38.073 05:44:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:38.073 05:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:44:38.332 05:44:51 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:44:38.332 05:44:51 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:44:38.332 05:44:51 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:38.332 05:44:51 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:38.332 05:44:51 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:38.332 05:44:51 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:38.332 05:44:51 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:38.590 05:44:52 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:44:38.590 05:44:52 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:38.590 Running I/O for 1 seconds... 00:44:39.526 19267.00 IOPS, 75.26 MiB/s 00:44:39.526 Latency(us) 00:44:39.526 [2024-12-15T04:44:53.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:39.526 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:44:39.526 nvme0n1 : 1.00 19314.57 75.45 0.00 0.00 6615.20 2871.10 12607.88 00:44:39.526 [2024-12-15T04:44:53.213Z] =================================================================================================================== 00:44:39.526 [2024-12-15T04:44:53.213Z] Total : 19314.57 75.45 0.00 0.00 6615.20 2871.10 12607.88 00:44:39.526 { 00:44:39.526 "results": [ 00:44:39.526 { 00:44:39.526 "job": "nvme0n1", 00:44:39.526 "core_mask": "0x2", 00:44:39.526 "workload": "randrw", 00:44:39.526 "percentage": 50, 00:44:39.526 "status": "finished", 00:44:39.526 "queue_depth": 128, 00:44:39.526 "io_size": 4096, 00:44:39.526 "runtime": 1.004216, 00:44:39.526 "iops": 19314.56977383352, 00:44:39.526 "mibps": 75.44753817903718, 
00:44:39.526 "io_failed": 0, 00:44:39.526 "io_timeout": 0, 00:44:39.526 "avg_latency_us": 6615.199891582948, 00:44:39.526 "min_latency_us": 2871.1009523809525, 00:44:39.526 "max_latency_us": 12607.878095238095 00:44:39.526 } 00:44:39.526 ], 00:44:39.526 "core_count": 1 00:44:39.526 } 00:44:39.526 05:44:53 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:39.526 05:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:39.785 05:44:53 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:44:39.785 05:44:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:39.785 05:44:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:39.785 05:44:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:39.785 05:44:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:39.785 05:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.044 05:44:53 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:44:40.044 05:44:53 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:44:40.044 05:44:53 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:40.044 05:44:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.044 05:44:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:40.044 05:44:53 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.044 05:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.303 05:44:53 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:44:40.303 05:44:53 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:40.303 05:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:44:40.303 [2024-12-15 05:44:53.928695] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:40.303 [2024-12-15 05:44:53.928936] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149c950 (107): Transport endpoint is not connected 00:44:40.303 [2024-12-15 05:44:53.929931] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x149c950 (9): Bad file descriptor 00:44:40.303 [2024-12-15 05:44:53.930932] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:40.303 [2024-12-15 05:44:53.930943] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:40.303 [2024-12-15 05:44:53.930950] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:40.303 [2024-12-15 05:44:53.930959] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:44:40.303 request: 00:44:40.303 { 00:44:40.303 "name": "nvme0", 00:44:40.303 "trtype": "tcp", 00:44:40.303 "traddr": "127.0.0.1", 00:44:40.303 "adrfam": "ipv4", 00:44:40.303 "trsvcid": "4420", 00:44:40.303 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:40.303 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:40.303 "prchk_reftag": false, 00:44:40.303 "prchk_guard": false, 00:44:40.303 "hdgst": false, 00:44:40.303 "ddgst": false, 00:44:40.303 "psk": "key1", 00:44:40.303 "allow_unrecognized_csi": false, 00:44:40.303 "method": "bdev_nvme_attach_controller", 00:44:40.303 "req_id": 1 00:44:40.303 } 00:44:40.303 Got JSON-RPC error response 00:44:40.303 response: 00:44:40.303 { 00:44:40.303 "code": -5, 00:44:40.303 "message": "Input/output error" 00:44:40.303 } 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:40.303 05:44:53 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:40.303 05:44:53 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:44:40.303 05:44:53 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:40.303 05:44:53 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.303 05:44:53 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:44:40.303 05:44:53 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:40.303 05:44:53 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.562 05:44:54 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:44:40.562 05:44:54 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:44:40.562 05:44:54 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:40.562 05:44:54 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:40.562 05:44:54 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:40.562 05:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:40.562 05:44:54 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:40.820 05:44:54 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:44:40.820 05:44:54 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:44:40.820 05:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:41.079 05:44:54 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:44:41.079 05:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:44:41.079 05:44:54 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:44:41.079 05:44:54 keyring_file -- keyring/file.sh@78 -- # jq length 00:44:41.079 05:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.338 05:44:54 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:44:41.338 05:44:54 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.XOeuUSKTeq 00:44:41.338 05:44:54 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:41.338 05:44:54 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:41.338 05:44:54 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:41.338 05:44:54 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:41.338 05:44:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.338 05:44:54 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:41.338 05:44:54 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.338 05:44:54 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:41.338 05:44:54 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:41.597 [2024-12-15 05:44:55.058969] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.XOeuUSKTeq': 0100660 00:44:41.597 [2024-12-15 05:44:55.058998] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:44:41.597 request: 00:44:41.597 { 00:44:41.597 "name": "key0", 00:44:41.597 "path": "/tmp/tmp.XOeuUSKTeq", 00:44:41.597 "method": "keyring_file_add_key", 00:44:41.597 "req_id": 1 00:44:41.597 } 00:44:41.597 Got JSON-RPC error response 00:44:41.597 response: 00:44:41.597 { 00:44:41.597 "code": -1, 00:44:41.597 "message": "Operation not permitted" 00:44:41.597 } 00:44:41.598 05:44:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:41.598 05:44:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:41.598 05:44:55 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:41.598 05:44:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:41.598 05:44:55 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.XOeuUSKTeq 00:44:41.598 05:44:55 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:41.598 05:44:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.XOeuUSKTeq 00:44:41.598 05:44:55 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.XOeuUSKTeq 00:44:41.598 05:44:55 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:44:41.598 05:44:55 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:41.598 05:44:55 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:41.598 05:44:55 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:41.598 05:44:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:41.598 05:44:55 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:41.855 05:44:55 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:44:41.856 05:44:55 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.856 05:44:55 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:44:41.856 05:44:55 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.856 05:44:55 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:41.856 05:44:55 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.856 05:44:55 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:41.856 05:44:55 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:41.856 05:44:55 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:41.856 05:44:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:42.115 [2024-12-15 05:44:55.636496] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.XOeuUSKTeq': No such file or directory 00:44:42.115 [2024-12-15 05:44:55.636516] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:44:42.115 [2024-12-15 05:44:55.636532] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:44:42.115 [2024-12-15 05:44:55.636538] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:44:42.115 [2024-12-15 05:44:55.636545] nvme.c: 842:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:44:42.115 [2024-12-15 05:44:55.636552] bdev_nvme.c:6801:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:44:42.115 request: 00:44:42.115 { 00:44:42.115 "name": "nvme0", 00:44:42.115 "trtype": "tcp", 00:44:42.115 "traddr": "127.0.0.1", 00:44:42.115 "adrfam": "ipv4", 00:44:42.115 "trsvcid": "4420", 00:44:42.115 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:42.115 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:44:42.115 "prchk_reftag": false, 00:44:42.115 "prchk_guard": false, 00:44:42.115 "hdgst": false, 00:44:42.115 "ddgst": false, 00:44:42.115 "psk": "key0", 00:44:42.115 "allow_unrecognized_csi": false, 00:44:42.115 "method": "bdev_nvme_attach_controller", 00:44:42.115 "req_id": 1 00:44:42.115 } 00:44:42.115 Got JSON-RPC error response 00:44:42.115 response: 00:44:42.115 { 00:44:42.115 "code": -19, 00:44:42.115 "message": "No such device" 00:44:42.115 } 00:44:42.115 05:44:55 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:44:42.115 05:44:55 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:42.115 05:44:55 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:42.115 05:44:55 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:42.115 05:44:55 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:44:42.115 05:44:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:42.374 05:44:55 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@17 -- # name=key0 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@17 -- # digest=0 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@18 -- # mktemp 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.8B3N2XKgVP 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:42.374 05:44:55 keyring_file -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:42.374 05:44:55 keyring_file -- 
nvmf/common.sh@730 -- # local prefix key digest 00:44:42.374 05:44:55 keyring_file -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:42.374 05:44:55 keyring_file -- nvmf/common.sh@732 -- # key=00112233445566778899aabbccddeeff 00:44:42.374 05:44:55 keyring_file -- nvmf/common.sh@732 -- # digest=0 00:44:42.374 05:44:55 keyring_file -- nvmf/common.sh@733 -- # python - 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.8B3N2XKgVP 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.8B3N2XKgVP 00:44:42.374 05:44:55 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.8B3N2XKgVP 00:44:42.374 05:44:55 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8B3N2XKgVP 00:44:42.374 05:44:55 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8B3N2XKgVP 00:44:42.633 05:44:56 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:42.633 05:44:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:42.891 nvme0n1 00:44:42.892 05:44:56 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:44:42.892 05:44:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:42.892 05:44:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:42.892 05:44:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:42.892 05:44:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:42.892 05:44:56 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:42.892 05:44:56 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:44:42.892 05:44:56 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:44:42.892 05:44:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:44:43.150 05:44:56 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:44:43.150 05:44:56 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:44:43.150 05:44:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.150 05:44:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:43.150 05:44:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.409 05:44:56 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:44:43.409 05:44:56 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:44:43.409 05:44:56 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:43.409 05:44:56 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:43.409 05:44:56 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:43.409 05:44:56 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:43.409 05:44:56 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.667 05:44:57 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:44:43.667 05:44:57 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:43.667 05:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:44:43.926 05:44:57 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:44:43.926 05:44:57 keyring_file -- keyring/file.sh@105 -- # jq length 00:44:43.926 05:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:43.926 05:44:57 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:44:43.926 05:44:57 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.8B3N2XKgVP 00:44:43.926 05:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.8B3N2XKgVP 00:44:44.185 05:44:57 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.M5ZbnPWA2U 00:44:44.185 05:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.M5ZbnPWA2U 00:44:44.444 05:44:57 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:44.444 05:44:57 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:44:44.703 nvme0n1 00:44:44.703 05:44:58 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:44:44.703 05:44:58 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:44:44.963 05:44:58 keyring_file -- keyring/file.sh@113 -- # config='{ 00:44:44.963 "subsystems": [ 00:44:44.963 { 00:44:44.963 "subsystem": 
"keyring", 00:44:44.963 "config": [ 00:44:44.963 { 00:44:44.963 "method": "keyring_file_add_key", 00:44:44.963 "params": { 00:44:44.963 "name": "key0", 00:44:44.963 "path": "/tmp/tmp.8B3N2XKgVP" 00:44:44.963 } 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "method": "keyring_file_add_key", 00:44:44.963 "params": { 00:44:44.963 "name": "key1", 00:44:44.963 "path": "/tmp/tmp.M5ZbnPWA2U" 00:44:44.963 } 00:44:44.963 } 00:44:44.963 ] 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "subsystem": "iobuf", 00:44:44.963 "config": [ 00:44:44.963 { 00:44:44.963 "method": "iobuf_set_options", 00:44:44.963 "params": { 00:44:44.963 "small_pool_count": 8192, 00:44:44.963 "large_pool_count": 1024, 00:44:44.963 "small_bufsize": 8192, 00:44:44.963 "large_bufsize": 135168, 00:44:44.963 "enable_numa": false 00:44:44.963 } 00:44:44.963 } 00:44:44.963 ] 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "subsystem": "sock", 00:44:44.963 "config": [ 00:44:44.963 { 00:44:44.963 "method": "sock_set_default_impl", 00:44:44.963 "params": { 00:44:44.963 "impl_name": "posix" 00:44:44.963 } 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "method": "sock_impl_set_options", 00:44:44.963 "params": { 00:44:44.963 "impl_name": "ssl", 00:44:44.963 "recv_buf_size": 4096, 00:44:44.963 "send_buf_size": 4096, 00:44:44.963 "enable_recv_pipe": true, 00:44:44.963 "enable_quickack": false, 00:44:44.963 "enable_placement_id": 0, 00:44:44.963 "enable_zerocopy_send_server": true, 00:44:44.963 "enable_zerocopy_send_client": false, 00:44:44.963 "zerocopy_threshold": 0, 00:44:44.963 "tls_version": 0, 00:44:44.963 "enable_ktls": false 00:44:44.963 } 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "method": "sock_impl_set_options", 00:44:44.963 "params": { 00:44:44.963 "impl_name": "posix", 00:44:44.963 "recv_buf_size": 2097152, 00:44:44.963 "send_buf_size": 2097152, 00:44:44.963 "enable_recv_pipe": true, 00:44:44.963 "enable_quickack": false, 00:44:44.963 "enable_placement_id": 0, 00:44:44.963 "enable_zerocopy_send_server": true, 
00:44:44.963 "enable_zerocopy_send_client": false, 00:44:44.963 "zerocopy_threshold": 0, 00:44:44.963 "tls_version": 0, 00:44:44.963 "enable_ktls": false 00:44:44.963 } 00:44:44.963 } 00:44:44.963 ] 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "subsystem": "vmd", 00:44:44.963 "config": [] 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "subsystem": "accel", 00:44:44.963 "config": [ 00:44:44.963 { 00:44:44.963 "method": "accel_set_options", 00:44:44.963 "params": { 00:44:44.963 "small_cache_size": 128, 00:44:44.963 "large_cache_size": 16, 00:44:44.963 "task_count": 2048, 00:44:44.963 "sequence_count": 2048, 00:44:44.963 "buf_count": 2048 00:44:44.963 } 00:44:44.963 } 00:44:44.963 ] 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "subsystem": "bdev", 00:44:44.963 "config": [ 00:44:44.963 { 00:44:44.963 "method": "bdev_set_options", 00:44:44.963 "params": { 00:44:44.963 "bdev_io_pool_size": 65535, 00:44:44.963 "bdev_io_cache_size": 256, 00:44:44.963 "bdev_auto_examine": true, 00:44:44.963 "iobuf_small_cache_size": 128, 00:44:44.963 "iobuf_large_cache_size": 16 00:44:44.963 } 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "method": "bdev_raid_set_options", 00:44:44.963 "params": { 00:44:44.963 "process_window_size_kb": 1024, 00:44:44.963 "process_max_bandwidth_mb_sec": 0 00:44:44.963 } 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "method": "bdev_iscsi_set_options", 00:44:44.963 "params": { 00:44:44.963 "timeout_sec": 30 00:44:44.963 } 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "method": "bdev_nvme_set_options", 00:44:44.963 "params": { 00:44:44.963 "action_on_timeout": "none", 00:44:44.963 "timeout_us": 0, 00:44:44.963 "timeout_admin_us": 0, 00:44:44.963 "keep_alive_timeout_ms": 10000, 00:44:44.963 "arbitration_burst": 0, 00:44:44.963 "low_priority_weight": 0, 00:44:44.963 "medium_priority_weight": 0, 00:44:44.963 "high_priority_weight": 0, 00:44:44.963 "nvme_adminq_poll_period_us": 10000, 00:44:44.963 "nvme_ioq_poll_period_us": 0, 00:44:44.963 "io_queue_requests": 512, 
00:44:44.963 "delay_cmd_submit": true, 00:44:44.963 "transport_retry_count": 4, 00:44:44.963 "bdev_retry_count": 3, 00:44:44.963 "transport_ack_timeout": 0, 00:44:44.963 "ctrlr_loss_timeout_sec": 0, 00:44:44.963 "reconnect_delay_sec": 0, 00:44:44.963 "fast_io_fail_timeout_sec": 0, 00:44:44.963 "disable_auto_failback": false, 00:44:44.963 "generate_uuids": false, 00:44:44.963 "transport_tos": 0, 00:44:44.963 "nvme_error_stat": false, 00:44:44.963 "rdma_srq_size": 0, 00:44:44.963 "io_path_stat": false, 00:44:44.963 "allow_accel_sequence": false, 00:44:44.963 "rdma_max_cq_size": 0, 00:44:44.963 "rdma_cm_event_timeout_ms": 0, 00:44:44.963 "dhchap_digests": [ 00:44:44.963 "sha256", 00:44:44.963 "sha384", 00:44:44.963 "sha512" 00:44:44.963 ], 00:44:44.963 "dhchap_dhgroups": [ 00:44:44.963 "null", 00:44:44.963 "ffdhe2048", 00:44:44.963 "ffdhe3072", 00:44:44.963 "ffdhe4096", 00:44:44.963 "ffdhe6144", 00:44:44.963 "ffdhe8192" 00:44:44.963 ], 00:44:44.963 "rdma_umr_per_io": false 00:44:44.963 } 00:44:44.963 }, 00:44:44.963 { 00:44:44.963 "method": "bdev_nvme_attach_controller", 00:44:44.963 "params": { 00:44:44.963 "name": "nvme0", 00:44:44.963 "trtype": "TCP", 00:44:44.963 "adrfam": "IPv4", 00:44:44.964 "traddr": "127.0.0.1", 00:44:44.964 "trsvcid": "4420", 00:44:44.964 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:44.964 "prchk_reftag": false, 00:44:44.964 "prchk_guard": false, 00:44:44.964 "ctrlr_loss_timeout_sec": 0, 00:44:44.964 "reconnect_delay_sec": 0, 00:44:44.964 "fast_io_fail_timeout_sec": 0, 00:44:44.964 "psk": "key0", 00:44:44.964 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:44.964 "hdgst": false, 00:44:44.964 "ddgst": false, 00:44:44.964 "multipath": "multipath" 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "bdev_nvme_set_hotplug", 00:44:44.964 "params": { 00:44:44.964 "period_us": 100000, 00:44:44.964 "enable": false 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "bdev_wait_for_examine" 00:44:44.964 } 00:44:44.964 ] 
00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "subsystem": "nbd", 00:44:44.964 "config": [] 00:44:44.964 } 00:44:44.964 ] 00:44:44.964 }' 00:44:44.964 05:44:58 keyring_file -- keyring/file.sh@115 -- # killprocess 659696 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659696 ']' 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659696 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659696 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659696' 00:44:44.964 killing process with pid 659696 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@973 -- # kill 659696 00:44:44.964 Received shutdown signal, test time was about 1.000000 seconds 00:44:44.964 00:44:44.964 Latency(us) 00:44:44.964 [2024-12-15T04:44:58.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:44.964 [2024-12-15T04:44:58.651Z] =================================================================================================================== 00:44:44.964 [2024-12-15T04:44:58.651Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@978 -- # wait 659696 00:44:44.964 05:44:58 keyring_file -- keyring/file.sh@118 -- # bperfpid=661172 00:44:44.964 05:44:58 keyring_file -- keyring/file.sh@120 -- # waitforlisten 661172 /var/tmp/bperf.sock 00:44:44.964 05:44:58 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 661172 ']' 00:44:44.964 05:44:58 keyring_file -- keyring/file.sh@116 -- # 
echo '{ 00:44:44.964 "subsystems": [ 00:44:44.964 { 00:44:44.964 "subsystem": "keyring", 00:44:44.964 "config": [ 00:44:44.964 { 00:44:44.964 "method": "keyring_file_add_key", 00:44:44.964 "params": { 00:44:44.964 "name": "key0", 00:44:44.964 "path": "/tmp/tmp.8B3N2XKgVP" 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "keyring_file_add_key", 00:44:44.964 "params": { 00:44:44.964 "name": "key1", 00:44:44.964 "path": "/tmp/tmp.M5ZbnPWA2U" 00:44:44.964 } 00:44:44.964 } 00:44:44.964 ] 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "subsystem": "iobuf", 00:44:44.964 "config": [ 00:44:44.964 { 00:44:44.964 "method": "iobuf_set_options", 00:44:44.964 "params": { 00:44:44.964 "small_pool_count": 8192, 00:44:44.964 "large_pool_count": 1024, 00:44:44.964 "small_bufsize": 8192, 00:44:44.964 "large_bufsize": 135168, 00:44:44.964 "enable_numa": false 00:44:44.964 } 00:44:44.964 } 00:44:44.964 ] 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "subsystem": "sock", 00:44:44.964 "config": [ 00:44:44.964 { 00:44:44.964 "method": "sock_set_default_impl", 00:44:44.964 "params": { 00:44:44.964 "impl_name": "posix" 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "sock_impl_set_options", 00:44:44.964 "params": { 00:44:44.964 "impl_name": "ssl", 00:44:44.964 "recv_buf_size": 4096, 00:44:44.964 "send_buf_size": 4096, 00:44:44.964 "enable_recv_pipe": true, 00:44:44.964 "enable_quickack": false, 00:44:44.964 "enable_placement_id": 0, 00:44:44.964 "enable_zerocopy_send_server": true, 00:44:44.964 "enable_zerocopy_send_client": false, 00:44:44.964 "zerocopy_threshold": 0, 00:44:44.964 "tls_version": 0, 00:44:44.964 "enable_ktls": false 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "sock_impl_set_options", 00:44:44.964 "params": { 00:44:44.964 "impl_name": "posix", 00:44:44.964 "recv_buf_size": 2097152, 00:44:44.964 "send_buf_size": 2097152, 00:44:44.964 "enable_recv_pipe": true, 00:44:44.964 "enable_quickack": false, 00:44:44.964 
"enable_placement_id": 0, 00:44:44.964 "enable_zerocopy_send_server": true, 00:44:44.964 "enable_zerocopy_send_client": false, 00:44:44.964 "zerocopy_threshold": 0, 00:44:44.964 "tls_version": 0, 00:44:44.964 "enable_ktls": false 00:44:44.964 } 00:44:44.964 } 00:44:44.964 ] 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "subsystem": "vmd", 00:44:44.964 "config": [] 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "subsystem": "accel", 00:44:44.964 "config": [ 00:44:44.964 { 00:44:44.964 "method": "accel_set_options", 00:44:44.964 "params": { 00:44:44.964 "small_cache_size": 128, 00:44:44.964 "large_cache_size": 16, 00:44:44.964 "task_count": 2048, 00:44:44.964 "sequence_count": 2048, 00:44:44.964 "buf_count": 2048 00:44:44.964 } 00:44:44.964 } 00:44:44.964 ] 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "subsystem": "bdev", 00:44:44.964 "config": [ 00:44:44.964 { 00:44:44.964 "method": "bdev_set_options", 00:44:44.964 "params": { 00:44:44.964 "bdev_io_pool_size": 65535, 00:44:44.964 "bdev_io_cache_size": 256, 00:44:44.964 "bdev_auto_examine": true, 00:44:44.964 "iobuf_small_cache_size": 128, 00:44:44.964 "iobuf_large_cache_size": 16 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "bdev_raid_set_options", 00:44:44.964 "params": { 00:44:44.964 "process_window_size_kb": 1024, 00:44:44.964 "process_max_bandwidth_mb_sec": 0 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "bdev_iscsi_set_options", 00:44:44.964 "params": { 00:44:44.964 "timeout_sec": 30 00:44:44.964 } 00:44:44.964 }, 00:44:44.964 { 00:44:44.964 "method": "bdev_nvme_set_options", 00:44:44.964 "params": { 00:44:44.964 "action_on_timeout": "none", 00:44:44.964 "timeout_us": 0, 00:44:44.964 "timeout_admin_us": 0, 00:44:44.964 "keep_alive_timeout_ms": 10000, 00:44:44.964 "arbitration_burst": 0, 00:44:44.964 "low_priority_weight": 0, 00:44:44.964 "medium_priority_weight": 0, 00:44:44.964 "high_priority_weight": 0, 00:44:44.964 "nvme_adminq_poll_period_us": 10000, 00:44:44.964 
"nvme_ioq_poll_period_us": 0, 00:44:44.964 "io_queue_requests": 512, 00:44:44.964 "delay_cmd_submit": true, 00:44:44.964 "transport_retry_count": 4, 00:44:44.964 "bdev_retry_count": 3, 00:44:44.964 "transport_ack_timeout": 0, 00:44:44.964 "ctrlr_loss_timeout_sec": 0, 00:44:44.964 "reconnect_delay_sec": 0, 00:44:44.964 "fast_io_fail_timeout_sec": 0, 00:44:44.964 "disable_auto_failback": false, 00:44:44.964 "generate_uuids": false, 00:44:44.964 "transport_tos": 0, 00:44:44.964 "nvme_error_stat": false, 00:44:44.964 "rdma_srq_size": 0, 00:44:44.964 "io_path_stat": false, 00:44:44.964 "allow_accel_sequence": false, 00:44:44.964 "rdma_max_cq_size": 0, 00:44:44.964 "rdma_cm_event_timeout_ms": 0, 00:44:44.964 "dhchap_digests": [ 00:44:44.964 "sha256", 00:44:44.964 "sha384", 00:44:44.964 "sha512" 00:44:44.964 ], 00:44:44.964 "dhchap_dhgroups": [ 00:44:44.964 "null", 00:44:44.964 "ffdhe2048", 00:44:44.964 "ffdhe3072", 00:44:44.964 "ffdhe4096", 00:44:44.964 "ffdhe6144", 00:44:44.964 "ffdhe8192" 00:44:44.965 ], 00:44:44.965 "rdma_umr_per_io": false 00:44:44.965 } 00:44:44.965 }, 00:44:44.965 { 00:44:44.965 "method": "bdev_nvme_attach_controller", 00:44:44.965 "params": { 00:44:44.965 "name": "nvme0", 00:44:44.965 "trtype": "TCP", 00:44:44.965 "adrfam": "IPv4", 00:44:44.965 "traddr": "127.0.0.1", 00:44:44.965 "trsvcid": "4420", 00:44:44.965 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:44.965 "prchk_reftag": false, 00:44:44.965 "prchk_guard": false, 00:44:44.965 "ctrlr_loss_timeout_sec": 0, 00:44:44.965 "reconnect_delay_sec": 0, 00:44:44.965 "fast_io_fail_timeout_sec": 0, 00:44:44.965 "psk": "key0", 00:44:44.965 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:44.965 "hdgst": false, 00:44:44.965 "ddgst": false, 00:44:44.965 "multipath": "multipath" 00:44:44.965 } 00:44:44.965 }, 00:44:44.965 { 00:44:44.965 "method": "bdev_nvme_set_hotplug", 00:44:44.965 "params": { 00:44:44.965 "period_us": 100000, 00:44:44.965 "enable": false 00:44:44.965 } 00:44:44.965 }, 00:44:44.965 { 
00:44:44.965 "method": "bdev_wait_for_examine" 00:44:44.965 } 00:44:44.965 ] 00:44:44.965 }, 00:44:44.965 { 00:44:44.965 "subsystem": "nbd", 00:44:44.965 "config": [] 00:44:44.965 } 00:44:44.965 ] 00:44:44.965 }' 00:44:44.965 05:44:58 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:44:44.965 05:44:58 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:44.965 05:44:58 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:44.965 05:44:58 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:44.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:44.965 05:44:58 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:44.965 05:44:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:45.224 [2024-12-15 05:44:58.690403] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:44:45.224 [2024-12-15 05:44:58.690453] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661172 ] 00:44:45.224 [2024-12-15 05:44:58.761123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:45.224 [2024-12-15 05:44:58.782474] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:45.483 [2024-12-15 05:44:58.938056] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:46.050 05:44:59 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:46.050 05:44:59 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:44:46.050 05:44:59 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:44:46.050 05:44:59 keyring_file -- keyring/file.sh@121 -- # jq length 00:44:46.050 05:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:46.051 05:44:59 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:44:46.309 05:44:59 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:46.309 05:44:59 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:44:46.309 05:44:59 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:44:46.309 05:44:59 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:44:46.309 05:44:59 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:46.568 05:45:00 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:44:46.568 05:45:00 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:44:46.568 05:45:00 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:44:46.568 05:45:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:44:46.827 05:45:00 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:44:46.827 05:45:00 keyring_file -- keyring/file.sh@1 -- # cleanup 00:44:46.827 05:45:00 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.8B3N2XKgVP /tmp/tmp.M5ZbnPWA2U 00:44:46.827 05:45:00 keyring_file -- keyring/file.sh@20 -- # killprocess 661172 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 661172 ']' 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 661172 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661172 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 661172' 00:44:46.827 killing process with pid 661172 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@973 -- # kill 661172 00:44:46.827 Received shutdown signal, test time was about 1.000000 seconds 00:44:46.827 00:44:46.827 Latency(us) 00:44:46.827 [2024-12-15T04:45:00.514Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:46.827 [2024-12-15T04:45:00.514Z] =================================================================================================================== 00:44:46.827 [2024-12-15T04:45:00.514Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:44:46.827 05:45:00 keyring_file -- common/autotest_common.sh@978 -- # wait 661172 00:44:47.086 05:45:00 keyring_file -- keyring/file.sh@21 -- # killprocess 659691 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 659691 ']' 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@958 -- # kill -0 659691 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@959 -- # uname 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 659691 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 659691' 00:44:47.086 killing process with pid 659691 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@973 -- # kill 659691 00:44:47.086 05:45:00 keyring_file -- common/autotest_common.sh@978 -- # wait 659691 00:44:47.345 00:44:47.345 real 0m11.704s 00:44:47.345 user 0m29.174s 00:44:47.345 sys 0m2.678s 00:44:47.345 05:45:00 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:47.345 05:45:00 
keyring_file -- common/autotest_common.sh@10 -- # set +x 00:44:47.345 ************************************ 00:44:47.345 END TEST keyring_file 00:44:47.345 ************************************ 00:44:47.345 05:45:00 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:44:47.345 05:45:00 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:47.345 05:45:00 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:44:47.345 05:45:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:44:47.345 05:45:00 -- common/autotest_common.sh@10 -- # set +x 00:44:47.345 ************************************ 00:44:47.345 START TEST keyring_linux 00:44:47.345 ************************************ 00:44:47.345 05:45:00 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:44:47.345 Joined session keyring: 287197212 00:44:47.604 * Looking for test storage... 
00:44:47.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:44:47.604 05:45:01 keyring_linux -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:44:47.604 05:45:01 keyring_linux -- common/autotest_common.sh@1711 -- # lcov --version 00:44:47.604 05:45:01 keyring_linux -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:44:47.604 05:45:01 keyring_linux -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@345 -- # : 1 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:44:47.604 05:45:01 keyring_linux -- scripts/common.sh@368 -- # return 0 00:44:47.605 05:45:01 keyring_linux -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:44:47.605 05:45:01 keyring_linux -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:44:47.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.605 --rc genhtml_branch_coverage=1 00:44:47.605 --rc genhtml_function_coverage=1 00:44:47.605 --rc genhtml_legend=1 00:44:47.605 --rc geninfo_all_blocks=1 00:44:47.605 --rc geninfo_unexecuted_blocks=1 00:44:47.605 00:44:47.605 ' 00:44:47.605 05:45:01 keyring_linux -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:44:47.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.605 --rc genhtml_branch_coverage=1 00:44:47.605 --rc genhtml_function_coverage=1 00:44:47.605 --rc genhtml_legend=1 00:44:47.605 --rc geninfo_all_blocks=1 00:44:47.605 --rc geninfo_unexecuted_blocks=1 00:44:47.605 00:44:47.605 ' 
00:44:47.605 05:45:01 keyring_linux -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:44:47.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.605 --rc genhtml_branch_coverage=1 00:44:47.605 --rc genhtml_function_coverage=1 00:44:47.605 --rc genhtml_legend=1 00:44:47.605 --rc geninfo_all_blocks=1 00:44:47.605 --rc geninfo_unexecuted_blocks=1 00:44:47.605 00:44:47.605 ' 00:44:47.605 05:45:01 keyring_linux -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:44:47.605 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:44:47.605 --rc genhtml_branch_coverage=1 00:44:47.605 --rc genhtml_function_coverage=1 00:44:47.605 --rc genhtml_legend=1 00:44:47.605 --rc geninfo_all_blocks=1 00:44:47.605 --rc geninfo_unexecuted_blocks=1 00:44:47.605 00:44:47.605 ' 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80b56b8f-cbc7-e911-906e-0017a4403562 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80b56b8f-cbc7-e911-906e-0017a4403562 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@49 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:44:47.605 05:45:01 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:44:47.605 05:45:01 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:44:47.605 05:45:01 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:44:47.605 05:45:01 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:44:47.605 05:45:01 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.605 05:45:01 keyring_linux -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.605 05:45:01 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.605 05:45:01 keyring_linux -- paths/export.sh@5 -- # export PATH 00:44:47.605 05:45:01 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@51 -- # : 0 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 
00:44:47.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@55 -- # have_pci_nics=0 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@732 -- # 
key=00112233445566778899aabbccddeeff 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:44:47.605 /tmp/:spdk-test:key0 00:44:47.605 05:45:01 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:44:47.605 05:45:01 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@743 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@730 -- # local prefix key digest 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@732 -- # prefix=NVMeTLSkey-1 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@732 -- # key=112233445566778899aabbccddeeff00 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@732 -- # digest=0 00:44:47.605 05:45:01 keyring_linux -- nvmf/common.sh@733 -- # python - 00:44:47.864 05:45:01 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:44:47.864 05:45:01 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:44:47.864 /tmp/:spdk-test:key1 00:44:47.864 05:45:01 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:44:47.864 
05:45:01 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=661807 00:44:47.864 05:45:01 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 661807 00:44:47.864 05:45:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661807 ']' 00:44:47.864 05:45:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:44:47.864 05:45:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:47.864 05:45:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:44:47.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:44:47.865 05:45:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:47.865 05:45:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:47.865 [2024-12-15 05:45:01.330578] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:44:47.865 [2024-12-15 05:45:01.330624] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661807 ] 00:44:47.865 [2024-12-15 05:45:01.401250] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:47.865 [2024-12-15 05:45:01.424006] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:48.123 05:45:01 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:48.123 [2024-12-15 05:45:01.616677] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:44:48.123 null0 00:44:48.123 [2024-12-15 05:45:01.648732] tcp.c:1049:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:44:48.123 [2024-12-15 05:45:01.649046] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:44:48.123 05:45:01 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:44:48.123 155767050 00:44:48.123 05:45:01 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:44:48.123 960861745 00:44:48.123 05:45:01 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=661840 00:44:48.123 05:45:01 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 661840 /var/tmp/bperf.sock 00:44:48.123 05:45:01 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 661840 ']' 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:44:48.123 05:45:01 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:44:48.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:44:48.124 05:45:01 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:44:48.124 05:45:01 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:48.124 [2024-12-15 05:45:01.720818] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 22.11.4 initialization... 
00:44:48.124 [2024-12-15 05:45:01.720863] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid661840 ] 00:44:48.124 [2024-12-15 05:45:01.794440] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:44:48.382 [2024-12-15 05:45:01.816716] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:44:48.382 05:45:01 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:44:48.382 05:45:01 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:44:48.382 05:45:01 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:44:48.382 05:45:01 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:44:48.382 05:45:02 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:44:48.382 05:45:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:44:48.641 05:45:02 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:48.641 05:45:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:44:48.900 [2024-12-15 05:45:02.483835] bdev_nvme_rpc.c: 515:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:44:48.900 nvme0n1 00:44:48.900 05:45:02 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:44:48.900 05:45:02 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:44:48.900 05:45:02 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:48.900 05:45:02 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:48.900 05:45:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:48.900 05:45:02 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:49.158 05:45:02 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:44:49.158 05:45:02 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:49.158 05:45:02 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:44:49.158 05:45:02 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:44:49.158 05:45:02 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:44:49.158 05:45:02 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:44:49.158 05:45:02 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:49.417 05:45:02 keyring_linux -- keyring/linux.sh@25 -- # sn=155767050 00:44:49.417 05:45:02 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:44:49.417 05:45:02 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:49.417 05:45:02 keyring_linux -- keyring/linux.sh@26 -- # [[ 155767050 == \1\5\5\7\6\7\0\5\0 ]] 00:44:49.417 05:45:02 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 155767050 00:44:49.417 05:45:02 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:44:49.417 05:45:02 keyring_linux 
-- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:44:49.417 Running I/O for 1 seconds... 00:44:50.791 21226.00 IOPS, 82.91 MiB/s 00:44:50.791 Latency(us) 00:44:50.791 [2024-12-15T04:45:04.478Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:50.791 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:44:50.791 nvme0n1 : 1.01 21228.27 82.92 0.00 0.00 6009.70 4681.14 10485.76 00:44:50.791 [2024-12-15T04:45:04.478Z] =================================================================================================================== 00:44:50.791 [2024-12-15T04:45:04.478Z] Total : 21228.27 82.92 0.00 0.00 6009.70 4681.14 10485.76 00:44:50.791 { 00:44:50.791 "results": [ 00:44:50.791 { 00:44:50.791 "job": "nvme0n1", 00:44:50.791 "core_mask": "0x2", 00:44:50.791 "workload": "randread", 00:44:50.791 "status": "finished", 00:44:50.791 "queue_depth": 128, 00:44:50.791 "io_size": 4096, 00:44:50.791 "runtime": 1.00597, 00:44:50.791 "iops": 21228.267244550036, 00:44:50.791 "mibps": 82.92291892402358, 00:44:50.791 "io_failed": 0, 00:44:50.791 "io_timeout": 0, 00:44:50.791 "avg_latency_us": 6009.700375199296, 00:44:50.791 "min_latency_us": 4681.142857142857, 00:44:50.791 "max_latency_us": 10485.76 00:44:50.791 } 00:44:50.791 ], 00:44:50.791 "core_count": 1 00:44:50.791 } 00:44:50.791 05:45:04 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:44:50.791 05:45:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:44:50.791 05:45:04 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:44:50.791 05:45:04 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:44:50.791 05:45:04 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:44:50.791 05:45:04 keyring_linux -- 
keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:44:50.791 05:45:04 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:44:50.791 05:45:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:44:51.049 05:45:04 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:44:51.049 05:45:04 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:44:51.049 05:45:04 keyring_linux -- keyring/linux.sh@23 -- # return 00:44:51.049 05:45:04 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:51.049 05:45:04 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:44:51.049 05:45:04 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:51.049 05:45:04 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:51.050 05:45:04 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q 
nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:44:51.050 [2024-12-15 05:45:04.687673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:44:51.050 [2024-12-15 05:45:04.688237] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb76700 (107): Transport endpoint is not connected 00:44:51.050 [2024-12-15 05:45:04.689232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xb76700 (9): Bad file descriptor 00:44:51.050 [2024-12-15 05:45:04.690233] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:44:51.050 [2024-12-15 05:45:04.690244] nvme.c: 709:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:44:51.050 [2024-12-15 05:45:04.690251] nvme.c: 895:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:44:51.050 [2024-12-15 05:45:04.690259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:44:51.050 request: 00:44:51.050 { 00:44:51.050 "name": "nvme0", 00:44:51.050 "trtype": "tcp", 00:44:51.050 "traddr": "127.0.0.1", 00:44:51.050 "adrfam": "ipv4", 00:44:51.050 "trsvcid": "4420", 00:44:51.050 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:44:51.050 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:44:51.050 "prchk_reftag": false, 00:44:51.050 "prchk_guard": false, 00:44:51.050 "hdgst": false, 00:44:51.050 "ddgst": false, 00:44:51.050 "psk": ":spdk-test:key1", 00:44:51.050 "allow_unrecognized_csi": false, 00:44:51.050 "method": "bdev_nvme_attach_controller", 00:44:51.050 "req_id": 1 00:44:51.050 } 00:44:51.050 Got JSON-RPC error response 00:44:51.050 response: 00:44:51.050 { 00:44:51.050 "code": -5, 00:44:51.050 "message": "Input/output error" 00:44:51.050 } 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@33 -- # sn=155767050 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 155767050 00:44:51.050 1 links removed 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:44:51.050 
05:45:04 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@33 -- # sn=960861745 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 960861745 00:44:51.050 1 links removed 00:44:51.050 05:45:04 keyring_linux -- keyring/linux.sh@41 -- # killprocess 661840 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661840 ']' 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661840 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:51.050 05:45:04 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661840 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661840' 00:44:51.308 killing process with pid 661840 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@973 -- # kill 661840 00:44:51.308 Received shutdown signal, test time was about 1.000000 seconds 00:44:51.308 00:44:51.308 Latency(us) 00:44:51.308 [2024-12-15T04:45:04.995Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:44:51.308 [2024-12-15T04:45:04.995Z] =================================================================================================================== 00:44:51.308 [2024-12-15T04:45:04.995Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@978 -- # wait 661840 
00:44:51.308 05:45:04 keyring_linux -- keyring/linux.sh@42 -- # killprocess 661807 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 661807 ']' 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 661807 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@959 -- # uname 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 661807 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 661807' 00:44:51.308 killing process with pid 661807 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@973 -- # kill 661807 00:44:51.308 05:45:04 keyring_linux -- common/autotest_common.sh@978 -- # wait 661807 00:44:51.876 00:44:51.876 real 0m4.285s 00:44:51.876 user 0m8.121s 00:44:51.876 sys 0m1.454s 00:44:51.876 05:45:05 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable 00:44:51.876 05:45:05 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:44:51.876 ************************************ 00:44:51.876 END TEST keyring_linux 00:44:51.876 ************************************ 00:44:51.876 05:45:05 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@346 -- # '[' 0 
-eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:44:51.876 05:45:05 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:44:51.876 05:45:05 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:44:51.876 05:45:05 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:44:51.876 05:45:05 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:44:51.876 05:45:05 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:44:51.876 05:45:05 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:44:51.876 05:45:05 -- common/autotest_common.sh@726 -- # xtrace_disable 00:44:51.876 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:44:51.876 05:45:05 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:44:51.876 05:45:05 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:44:51.876 05:45:05 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:44:51.876 05:45:05 -- common/autotest_common.sh@10 -- # set +x 00:44:57.147 INFO: APP EXITING 00:44:57.147 INFO: killing all VMs 00:44:57.147 INFO: killing vhost app 00:44:57.147 INFO: EXIT DONE 00:44:59.868 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:44:59.868 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:44:59.868 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:44:59.868 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:44:59.868 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:44:59.868 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:44:59.868 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:44:59.868 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:45:00.126 
0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:45:00.126 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:45:03.413 Cleaning 00:45:03.413 Removing: /var/run/dpdk/spdk0/config 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:45:03.413 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:45:03.413 Removing: /var/run/dpdk/spdk0/hugepage_info 00:45:03.413 Removing: /var/run/dpdk/spdk1/config 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:45:03.413 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:45:03.413 Removing: /var/run/dpdk/spdk1/hugepage_info 00:45:03.413 Removing: /var/run/dpdk/spdk2/config 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:45:03.413 
Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:45:03.413 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:45:03.413 Removing: /var/run/dpdk/spdk2/hugepage_info 00:45:03.413 Removing: /var/run/dpdk/spdk3/config 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:45:03.413 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:45:03.413 Removing: /var/run/dpdk/spdk3/hugepage_info 00:45:03.413 Removing: /var/run/dpdk/spdk4/config 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:45:03.413 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:45:03.413 Removing: /var/run/dpdk/spdk4/hugepage_info 
00:45:03.414 Removing: /dev/shm/bdev_svc_trace.1 00:45:03.414 Removing: /dev/shm/nvmf_trace.0 00:45:03.414 Removing: /dev/shm/spdk_tgt_trace.pid104088 00:45:03.414 Removing: /var/run/dpdk/spdk0 00:45:03.414 Removing: /var/run/dpdk/spdk1 00:45:03.414 Removing: /var/run/dpdk/spdk2 00:45:03.414 Removing: /var/run/dpdk/spdk3 00:45:03.414 Removing: /var/run/dpdk/spdk4 00:45:03.414 Removing: /var/run/dpdk/spdk_pid101998 00:45:03.414 Removing: /var/run/dpdk/spdk_pid103025 00:45:03.414 Removing: /var/run/dpdk/spdk_pid104088 00:45:03.414 Removing: /var/run/dpdk/spdk_pid104711 00:45:03.414 Removing: /var/run/dpdk/spdk_pid105633 00:45:03.414 Removing: /var/run/dpdk/spdk_pid105757 00:45:03.414 Removing: /var/run/dpdk/spdk_pid106792 00:45:03.414 Removing: /var/run/dpdk/spdk_pid106820 00:45:03.414 Removing: /var/run/dpdk/spdk_pid107166 00:45:03.414 Removing: /var/run/dpdk/spdk_pid108641 00:45:03.414 Removing: /var/run/dpdk/spdk_pid109969 00:45:03.414 Removing: /var/run/dpdk/spdk_pid110385 00:45:03.414 Removing: /var/run/dpdk/spdk_pid110591 00:45:03.414 Removing: /var/run/dpdk/spdk_pid110772 00:45:03.414 Removing: /var/run/dpdk/spdk_pid111045 00:45:03.414 Removing: /var/run/dpdk/spdk_pid111321 00:45:03.414 Removing: /var/run/dpdk/spdk_pid111645 00:45:03.414 Removing: /var/run/dpdk/spdk_pid111943 00:45:03.414 Removing: /var/run/dpdk/spdk_pid112948 00:45:03.414 Removing: /var/run/dpdk/spdk_pid116034 00:45:03.414 Removing: /var/run/dpdk/spdk_pid116250 00:45:03.414 Removing: /var/run/dpdk/spdk_pid116500 00:45:03.414 Removing: /var/run/dpdk/spdk_pid116510 00:45:03.414 Removing: /var/run/dpdk/spdk_pid116986 00:45:03.414 Removing: /var/run/dpdk/spdk_pid117000 00:45:03.414 Removing: /var/run/dpdk/spdk_pid117469 00:45:03.414 Removing: /var/run/dpdk/spdk_pid117475 00:45:03.414 Removing: /var/run/dpdk/spdk_pid117821 00:45:03.414 Removing: /var/run/dpdk/spdk_pid117951 00:45:03.414 Removing: /var/run/dpdk/spdk_pid118137 00:45:03.414 Removing: /var/run/dpdk/spdk_pid118206 00:45:03.414 
Removing: /var/run/dpdk/spdk_pid118714 00:45:03.414 Removing: /var/run/dpdk/spdk_pid118879 00:45:03.414 Removing: /var/run/dpdk/spdk_pid119205 00:45:03.414 Removing: /var/run/dpdk/spdk_pid122948 00:45:03.414 Removing: /var/run/dpdk/spdk_pid127133 00:45:03.414 Removing: /var/run/dpdk/spdk_pid137151 00:45:03.414 Removing: /var/run/dpdk/spdk_pid137823 00:45:03.414 Removing: /var/run/dpdk/spdk_pid142016 00:45:03.414 Removing: /var/run/dpdk/spdk_pid142252 00:45:03.414 Removing: /var/run/dpdk/spdk_pid146445 00:45:03.414 Removing: /var/run/dpdk/spdk_pid152344 00:45:03.414 Removing: /var/run/dpdk/spdk_pid154950 00:45:03.414 Removing: /var/run/dpdk/spdk_pid165578 00:45:03.414 Removing: /var/run/dpdk/spdk_pid174435 00:45:03.414 Removing: /var/run/dpdk/spdk_pid176217 00:45:03.414 Removing: /var/run/dpdk/spdk_pid177124 00:45:03.414 Removing: /var/run/dpdk/spdk_pid193889 00:45:03.414 Removing: /var/run/dpdk/spdk_pid197665 00:45:03.414 Removing: /var/run/dpdk/spdk_pid279496 00:45:03.414 Removing: /var/run/dpdk/spdk_pid284893 00:45:03.414 Removing: /var/run/dpdk/spdk_pid291144 00:45:03.414 Removing: /var/run/dpdk/spdk_pid297388 00:45:03.414 Removing: /var/run/dpdk/spdk_pid297471 00:45:03.414 Removing: /var/run/dpdk/spdk_pid298210 00:45:03.414 Removing: /var/run/dpdk/spdk_pid299063 00:45:03.414 Removing: /var/run/dpdk/spdk_pid299956 00:45:03.414 Removing: /var/run/dpdk/spdk_pid300533 00:45:03.414 Removing: /var/run/dpdk/spdk_pid300619 00:45:03.414 Removing: /var/run/dpdk/spdk_pid300850 00:45:03.414 Removing: /var/run/dpdk/spdk_pid300925 00:45:03.414 Removing: /var/run/dpdk/spdk_pid301080 00:45:03.414 Removing: /var/run/dpdk/spdk_pid301863 00:45:03.414 Removing: /var/run/dpdk/spdk_pid302657 00:45:03.414 Removing: /var/run/dpdk/spdk_pid303555 00:45:03.414 Removing: /var/run/dpdk/spdk_pid304200 00:45:03.414 Removing: /var/run/dpdk/spdk_pid304231 00:45:03.414 Removing: /var/run/dpdk/spdk_pid304460 00:45:03.414 Removing: /var/run/dpdk/spdk_pid305479 00:45:03.414 Removing: 
/var/run/dpdk/spdk_pid306473 00:45:03.414 Removing: /var/run/dpdk/spdk_pid314585 00:45:03.414 Removing: /var/run/dpdk/spdk_pid343519 00:45:03.414 Removing: /var/run/dpdk/spdk_pid347854 00:45:03.414 Removing: /var/run/dpdk/spdk_pid349587 00:45:03.414 Removing: /var/run/dpdk/spdk_pid351250 00:45:03.414 Removing: /var/run/dpdk/spdk_pid351424 00:45:03.414 Removing: /var/run/dpdk/spdk_pid351650 00:45:03.414 Removing: /var/run/dpdk/spdk_pid351676 00:45:03.414 Removing: /var/run/dpdk/spdk_pid352177 00:45:03.414 Removing: /var/run/dpdk/spdk_pid353966 00:45:03.414 Removing: /var/run/dpdk/spdk_pid354710 00:45:03.414 Removing: /var/run/dpdk/spdk_pid355195 00:45:03.414 Removing: /var/run/dpdk/spdk_pid357361 00:45:03.414 Removing: /var/run/dpdk/spdk_pid358216 00:45:03.673 Removing: /var/run/dpdk/spdk_pid358929 00:45:03.673 Removing: /var/run/dpdk/spdk_pid362906 00:45:03.673 Removing: /var/run/dpdk/spdk_pid368174 00:45:03.673 Removing: /var/run/dpdk/spdk_pid368175 00:45:03.673 Removing: /var/run/dpdk/spdk_pid368176 00:45:03.673 Removing: /var/run/dpdk/spdk_pid372055 00:45:03.673 Removing: /var/run/dpdk/spdk_pid375789 00:45:03.673 Removing: /var/run/dpdk/spdk_pid380657 00:45:03.673 Removing: /var/run/dpdk/spdk_pid415925 00:45:03.673 Removing: /var/run/dpdk/spdk_pid419944 00:45:03.673 Removing: /var/run/dpdk/spdk_pid425828 00:45:03.673 Removing: /var/run/dpdk/spdk_pid427101 00:45:03.673 Removing: /var/run/dpdk/spdk_pid428388 00:45:03.673 Removing: /var/run/dpdk/spdk_pid429682 00:45:03.673 Removing: /var/run/dpdk/spdk_pid434193 00:45:03.673 Removing: /var/run/dpdk/spdk_pid438388 00:45:03.673 Removing: /var/run/dpdk/spdk_pid442799 00:45:03.673 Removing: /var/run/dpdk/spdk_pid450166 00:45:03.673 Removing: /var/run/dpdk/spdk_pid450270 00:45:03.673 Removing: /var/run/dpdk/spdk_pid454674 00:45:03.673 Removing: /var/run/dpdk/spdk_pid454903 00:45:03.673 Removing: /var/run/dpdk/spdk_pid455126 00:45:03.673 Removing: /var/run/dpdk/spdk_pid455564 00:45:03.673 Removing: 
/var/run/dpdk/spdk_pid455569 00:45:03.673 Removing: /var/run/dpdk/spdk_pid456931 00:45:03.673 Removing: /var/run/dpdk/spdk_pid458490 00:45:03.673 Removing: /var/run/dpdk/spdk_pid460187 00:45:03.673 Removing: /var/run/dpdk/spdk_pid461825 00:45:03.673 Removing: /var/run/dpdk/spdk_pid463380 00:45:03.673 Removing: /var/run/dpdk/spdk_pid465036 00:45:03.673 Removing: /var/run/dpdk/spdk_pid470881 00:45:03.673 Removing: /var/run/dpdk/spdk_pid471440 00:45:03.673 Removing: /var/run/dpdk/spdk_pid473137 00:45:03.673 Removing: /var/run/dpdk/spdk_pid474158 00:45:03.673 Removing: /var/run/dpdk/spdk_pid479789 00:45:03.673 Removing: /var/run/dpdk/spdk_pid482872 00:45:03.673 Removing: /var/run/dpdk/spdk_pid488222 00:45:03.673 Removing: /var/run/dpdk/spdk_pid493445 00:45:03.673 Removing: /var/run/dpdk/spdk_pid501867 00:45:03.673 Removing: /var/run/dpdk/spdk_pid508913 00:45:03.673 Removing: /var/run/dpdk/spdk_pid508915 00:45:03.673 Removing: /var/run/dpdk/spdk_pid527765 00:45:03.673 Removing: /var/run/dpdk/spdk_pid528352 00:45:03.673 Removing: /var/run/dpdk/spdk_pid528817 00:45:03.673 Removing: /var/run/dpdk/spdk_pid529873 00:45:03.673 Removing: /var/run/dpdk/spdk_pid530400 00:45:03.673 Removing: /var/run/dpdk/spdk_pid531068 00:45:03.673 Removing: /var/run/dpdk/spdk_pid531532 00:45:03.673 Removing: /var/run/dpdk/spdk_pid532000 00:45:03.673 Removing: /var/run/dpdk/spdk_pid536159 00:45:03.673 Removing: /var/run/dpdk/spdk_pid536388 00:45:03.673 Removing: /var/run/dpdk/spdk_pid542323 00:45:03.673 Removing: /var/run/dpdk/spdk_pid542377 00:45:03.673 Removing: /var/run/dpdk/spdk_pid547755 00:45:03.673 Removing: /var/run/dpdk/spdk_pid551909 00:45:03.673 Removing: /var/run/dpdk/spdk_pid561349 00:45:03.673 Removing: /var/run/dpdk/spdk_pid561858 00:45:03.673 Removing: /var/run/dpdk/spdk_pid566031 00:45:03.673 Removing: /var/run/dpdk/spdk_pid566263 00:45:03.673 Removing: /var/run/dpdk/spdk_pid570238 00:45:03.673 Removing: /var/run/dpdk/spdk_pid576468 00:45:03.673 Removing: 
/var/run/dpdk/spdk_pid578979 00:45:03.673 Removing: /var/run/dpdk/spdk_pid588714 00:45:03.673 Removing: /var/run/dpdk/spdk_pid597226 00:45:03.673 Removing: /var/run/dpdk/spdk_pid598996 00:45:03.673 Removing: /var/run/dpdk/spdk_pid599865 00:45:03.673 Removing: /var/run/dpdk/spdk_pid615589 00:45:03.673 Removing: /var/run/dpdk/spdk_pid619377 00:45:03.673 Removing: /var/run/dpdk/spdk_pid622605 00:45:03.673 Removing: /var/run/dpdk/spdk_pid630180 00:45:03.932 Removing: /var/run/dpdk/spdk_pid630185 00:45:03.932 Removing: /var/run/dpdk/spdk_pid635132 00:45:03.932 Removing: /var/run/dpdk/spdk_pid637042 00:45:03.932 Removing: /var/run/dpdk/spdk_pid638965 00:45:03.932 Removing: /var/run/dpdk/spdk_pid640084 00:45:03.932 Removing: /var/run/dpdk/spdk_pid642108 00:45:03.932 Removing: /var/run/dpdk/spdk_pid643143 00:45:03.932 Removing: /var/run/dpdk/spdk_pid651715 00:45:03.932 Removing: /var/run/dpdk/spdk_pid652161 00:45:03.932 Removing: /var/run/dpdk/spdk_pid652614 00:45:03.932 Removing: /var/run/dpdk/spdk_pid654828 00:45:03.932 Removing: /var/run/dpdk/spdk_pid655299 00:45:03.932 Removing: /var/run/dpdk/spdk_pid655842 00:45:03.932 Removing: /var/run/dpdk/spdk_pid659691 00:45:03.932 Removing: /var/run/dpdk/spdk_pid659696 00:45:03.932 Removing: /var/run/dpdk/spdk_pid661172 00:45:03.932 Removing: /var/run/dpdk/spdk_pid661807 00:45:03.932 Removing: /var/run/dpdk/spdk_pid661840 00:45:03.932 Clean 00:45:03.932 05:45:17 -- common/autotest_common.sh@1453 -- # return 0 00:45:03.932 05:45:17 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:45:03.932 05:45:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:03.932 05:45:17 -- common/autotest_common.sh@10 -- # set +x 00:45:03.932 05:45:17 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:45:03.932 05:45:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:45:03.932 05:45:17 -- common/autotest_common.sh@10 -- # set +x 00:45:03.932 05:45:17 -- spdk/autotest.sh@392 -- # chmod a+r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:03.932 05:45:17 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:45:03.932 05:45:17 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:45:03.932 05:45:17 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:45:03.932 05:45:17 -- spdk/autotest.sh@398 -- # hostname 00:45:03.932 05:45:17 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-04 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:45:04.190 geninfo: WARNING: invalid characters removed from testname! 00:45:26.124 05:45:38 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:28.028 05:45:41 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:29.932 05:45:43 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:31.310 05:45:44 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:33.213 05:45:46 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:35.118 05:45:48 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:45:37.022 05:45:50 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:45:37.022 05:45:50 -- spdk/autorun.sh@1 -- $ timing_finish 00:45:37.022 05:45:50 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]] 00:45:37.022 05:45:50 -- 
common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:45:37.022 05:45:50 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:45:37.022 05:45:50 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:45:37.022 + [[ -n 7537 ]] 00:45:37.022 + sudo kill 7537 00:45:37.033 [Pipeline] } 00:45:37.047 [Pipeline] // stage 00:45:37.052 [Pipeline] } 00:45:37.066 [Pipeline] // timeout 00:45:37.071 [Pipeline] } 00:45:37.084 [Pipeline] // catchError 00:45:37.089 [Pipeline] } 00:45:37.104 [Pipeline] // wrap 00:45:37.110 [Pipeline] } 00:45:37.122 [Pipeline] // catchError 00:45:37.131 [Pipeline] stage 00:45:37.133 [Pipeline] { (Epilogue) 00:45:37.146 [Pipeline] catchError 00:45:37.147 [Pipeline] { 00:45:37.160 [Pipeline] echo 00:45:37.161 Cleanup processes 00:45:37.167 [Pipeline] sh 00:45:37.455 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:37.455 673965 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:37.469 [Pipeline] sh 00:45:37.755 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:45:37.755 ++ grep -v 'sudo pgrep' 00:45:37.755 ++ awk '{print $1}' 00:45:37.755 + sudo kill -9 00:45:37.755 + true 00:45:37.767 [Pipeline] sh 00:45:38.052 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:45:50.276 [Pipeline] sh 00:45:50.562 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:45:50.562 Artifacts sizes are good 00:45:50.576 [Pipeline] archiveArtifacts 00:45:50.583 Archiving artifacts 00:45:50.984 [Pipeline] sh 00:45:51.269 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:45:51.283 [Pipeline] cleanWs 00:45:51.292 [WS-CLEANUP] Deleting project workspace... 00:45:51.292 [WS-CLEANUP] Deferred wipeout is used... 
00:45:51.298 [WS-CLEANUP] done 00:45:51.300 [Pipeline] } 00:45:51.316 [Pipeline] // catchError 00:45:51.326 [Pipeline] sh 00:45:51.612 + logger -p user.info -t JENKINS-CI 00:45:51.636 [Pipeline] } 00:45:51.648 [Pipeline] // stage 00:45:51.652 [Pipeline] } 00:45:51.664 [Pipeline] // node 00:45:51.667 [Pipeline] End of Pipeline 00:45:51.700 Finished: SUCCESS